00:00:00.001 Started by upstream project "autotest-nightly" build number 3622
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3004
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.010 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/dsa-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.010 The recommended git tool is: git
00:00:00.010 using credential 00000000-0000-0000-0000-000000000002
00:00:00.014 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/dsa-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.026 Fetching changes from the remote Git repository
00:00:00.027 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.040 Using shallow fetch with depth 1
00:00:00.041 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.041 > git --version # timeout=10
00:00:00.056 > git --version # 'git version 2.39.2'
00:00:00.056 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.056 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.056 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:02.393 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:02.404 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:02.413 Checking out Revision 3fbc5c0ceee15b3cc82c7e28355dfd4637aa6338 (FETCH_HEAD)
00:00:02.413 > git config core.sparsecheckout # timeout=10
00:00:02.423 > git read-tree -mu HEAD # timeout=10
00:00:02.437 > git checkout -f 3fbc5c0ceee15b3cc82c7e28355dfd4637aa6338 # timeout=5
00:00:02.454 Commit message: "perf/upload_to_db: update columns after changes in get_results.sh"
00:00:02.454 > git rev-list --no-walk 3fbc5c0ceee15b3cc82c7e28355dfd4637aa6338 # timeout=10
00:00:02.525 [Pipeline] Start of Pipeline
00:00:02.538 [Pipeline] library
00:00:02.540 Loading library shm_lib@master
00:00:02.540 Library shm_lib@master is cached. Copying from home.
00:00:02.560 [Pipeline] node
00:00:02.567 Running on FCP03 in /var/jenkins/workspace/dsa-phy-autotest
00:00:02.571 [Pipeline] {
00:00:02.581 [Pipeline] catchError
00:00:02.582 [Pipeline] {
00:00:02.595 [Pipeline] wrap
00:00:02.603 [Pipeline] {
00:00:02.609 [Pipeline] stage
00:00:02.610 [Pipeline] { (Prologue)
00:00:02.760 [Pipeline] sh
00:00:03.045 + logger -p user.info -t JENKINS-CI
00:00:03.062 [Pipeline] echo
00:00:03.064 Node: FCP03
00:00:03.072 [Pipeline] sh
00:00:03.375 [Pipeline] setCustomBuildProperty
00:00:03.387 [Pipeline] echo
00:00:03.388 Cleanup processes
00:00:03.393 [Pipeline] sh
00:00:03.679 + sudo pgrep -af /var/jenkins/workspace/dsa-phy-autotest/spdk
00:00:03.679 1099826 sudo pgrep -af /var/jenkins/workspace/dsa-phy-autotest/spdk
00:00:03.692 [Pipeline] sh
00:00:03.979 ++ sudo pgrep -af /var/jenkins/workspace/dsa-phy-autotest/spdk
00:00:03.979 ++ grep -v 'sudo pgrep'
00:00:03.979 ++ awk '{print $1}'
00:00:03.979 + sudo kill -9
00:00:03.979 + true
00:00:03.993 [Pipeline] cleanWs
00:00:04.004 [WS-CLEANUP] Deleting project workspace...
00:00:04.004 [WS-CLEANUP] Deferred wipeout is used...
00:00:04.010 [WS-CLEANUP] done
00:00:04.013 [Pipeline] setCustomBuildProperty
00:00:04.024 [Pipeline] sh
00:00:04.310 + sudo git config --global --replace-all safe.directory '*'
00:00:04.365 [Pipeline] nodesByLabel
00:00:04.366 Could not find any nodes with 'sorcerer' label
00:00:04.370 [Pipeline] retry
00:00:04.372 [Pipeline] {
00:00:04.386 [Pipeline] checkout
00:00:04.392 The recommended git tool is: git
00:00:04.403 using credential 00000000-0000-0000-0000-000000000002
00:00:04.409 Cloning the remote Git repository
00:00:04.412 Honoring refspec on initial clone
00:00:04.418 Cloning repository https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:04.418 > git init /var/jenkins/workspace/dsa-phy-autotest/jbp # timeout=10
00:00:04.426 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:04.426 > git --version # timeout=10
00:00:04.429 > git --version # 'git version 2.43.0'
00:00:04.429 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:04.430 Setting http proxy: proxy-dmz.intel.com:911
00:00:04.430 > git fetch --tags --force --progress -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=10
00:00:12.184 Avoid second fetch
00:00:12.201 Checking out Revision 3fbc5c0ceee15b3cc82c7e28355dfd4637aa6338 (FETCH_HEAD)
00:00:12.301 Commit message: "perf/upload_to_db: update columns after changes in get_results.sh"
00:00:12.308 [Pipeline] }
00:00:12.329 [Pipeline] // retry
00:00:12.340 [Pipeline] nodesByLabel
00:00:12.342 Could not find any nodes with 'sorcerer' label
00:00:12.347 [Pipeline] retry
00:00:12.349 [Pipeline] {
00:00:12.369 [Pipeline] checkout
00:00:12.377 The recommended git tool is: NONE
00:00:12.400 using credential 00000000-0000-0000-0000-000000000002
00:00:12.405 Cloning the remote Git repository
00:00:12.408 Honoring refspec on initial clone
00:00:12.170 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:12.176 > git config --add remote.origin.fetch refs/heads/master # timeout=10
00:00:12.189 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:12.199 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:12.207 > git config core.sparsecheckout # timeout=10
00:00:12.210 > git checkout -f 3fbc5c0ceee15b3cc82c7e28355dfd4637aa6338 # timeout=10
00:00:12.414 Cloning repository https://review.spdk.io/gerrit/a/spdk/spdk
00:00:12.414 > git init /var/jenkins/workspace/dsa-phy-autotest/spdk # timeout=10
00:00:12.420 Using reference repository: /var/ci_repos/spdk_multi
00:00:12.420 Fetching upstream changes from https://review.spdk.io/gerrit/a/spdk/spdk
00:00:12.420 > git --version # timeout=10
00:00:12.423 > git --version # 'git version 2.43.0'
00:00:12.423 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:12.424 Setting http proxy: proxy-dmz.intel.com:911
00:00:12.424 > git fetch --tags --force --progress -- https://review.spdk.io/gerrit/a/spdk/spdk refs/heads/master +refs/heads/master:refs/remotes/origin/master # timeout=10
00:00:33.155 Avoid second fetch
00:00:33.170 Checking out Revision 3f2c8979187809f9b3b0766ead4b91dc70fd73c6 (FETCH_HEAD)
00:00:33.427 Commit message: "event: switch reactors to poll mode before stopping"
00:00:33.437 First time build. Skipping changelog.
00:00:33.138 > git config remote.origin.url https://review.spdk.io/gerrit/a/spdk/spdk # timeout=10
00:00:33.143 > git config --add remote.origin.fetch refs/heads/master # timeout=10
00:00:33.147 > git config --add remote.origin.fetch +refs/heads/master:refs/remotes/origin/master # timeout=10
00:00:33.160 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:33.169 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:33.176 > git config core.sparsecheckout # timeout=10
00:00:33.179 > git checkout -f 3f2c8979187809f9b3b0766ead4b91dc70fd73c6 # timeout=10
00:00:33.432 > git rev-list --no-walk 36faa8c312bf9059b86e0f503d7fd6b43c1498e6 # timeout=10
00:00:33.446 > git remote # timeout=10
00:00:33.451 > git submodule init # timeout=10
00:00:33.524 > git submodule sync # timeout=10
00:00:33.593 > git config --get remote.origin.url # timeout=10
00:00:33.603 > git submodule init # timeout=10
00:00:33.680 > git config -f .gitmodules --get-regexp ^submodule\.(.+)\.url # timeout=10
00:00:33.686 > git config --get submodule.dpdk.url # timeout=10
00:00:33.690 > git remote # timeout=10
00:00:33.693 > git config --get remote.origin.url # timeout=10
00:00:33.698 > git config -f .gitmodules --get submodule.dpdk.path # timeout=10
00:00:33.701 > git config --get submodule.intel-ipsec-mb.url # timeout=10
00:00:33.706 > git remote # timeout=10
00:00:33.710 > git config --get remote.origin.url # timeout=10
00:00:33.714 > git config -f .gitmodules --get submodule.intel-ipsec-mb.path # timeout=10
00:00:33.718 > git config --get submodule.isa-l.url # timeout=10
00:00:33.723 > git remote # timeout=10
00:00:33.728 > git config --get remote.origin.url # timeout=10
00:00:33.731 > git config -f .gitmodules --get submodule.isa-l.path # timeout=10
00:00:33.733 > git config --get submodule.ocf.url # timeout=10
00:00:33.738 > git remote # timeout=10
00:00:33.742 > git config --get remote.origin.url # timeout=10
00:00:33.746 > git config -f .gitmodules --get submodule.ocf.path # timeout=10
00:00:33.749 > git config --get submodule.libvfio-user.url # timeout=10
00:00:33.753 > git remote # timeout=10
00:00:33.758 > git config --get remote.origin.url # timeout=10
00:00:33.761 > git config -f .gitmodules --get submodule.libvfio-user.path # timeout=10
00:00:33.765 > git config --get submodule.xnvme.url # timeout=10
00:00:33.768 > git remote # timeout=10
00:00:33.772 > git config --get remote.origin.url # timeout=10
00:00:33.775 > git config -f .gitmodules --get submodule.xnvme.path # timeout=10
00:00:33.779 > git config --get submodule.isa-l-crypto.url # timeout=10
00:00:33.782 > git remote # timeout=10
00:00:33.786 > git config --get remote.origin.url # timeout=10
00:00:33.790 > git config -f .gitmodules --get submodule.isa-l-crypto.path # timeout=10
00:00:33.797 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:33.797 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:33.797 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:33.797 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:33.797 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:33.797 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:33.797 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:33.798 Setting http proxy: proxy-dmz.intel.com:911
00:00:33.798 Setting http proxy: proxy-dmz.intel.com:911
00:00:33.798 Setting http proxy: proxy-dmz.intel.com:911
00:00:33.798 Setting http proxy: proxy-dmz.intel.com:911
00:00:33.798 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi dpdk # timeout=10
00:00:33.798 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi isa-l-crypto # timeout=10
00:00:33.798 Setting http proxy: proxy-dmz.intel.com:911
00:00:33.798 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi libvfio-user # timeout=10
00:00:33.798 Setting http proxy: proxy-dmz.intel.com:911
00:00:33.798 Setting http proxy: proxy-dmz.intel.com:911
00:00:33.798 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi intel-ipsec-mb # timeout=10
00:00:33.798 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi ocf # timeout=10
00:00:33.798 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi xnvme # timeout=10
00:00:33.798 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi isa-l # timeout=10
00:00:46.010 [Pipeline] }
00:00:46.029 [Pipeline] // retry
00:00:46.036 [Pipeline] sh
00:00:46.383 + git -C spdk log --oneline -n5
00:00:46.383 3f2c8979187 event: switch reactors to poll mode before stopping
00:00:46.383 443e1ea3147 setup.sh: emit command line to /dev/kmsg on Linux
00:00:46.383 a1264177cd2 pkgdep/git: Adjust ICE driver to kernel >= 6.8.x
00:00:46.383 af95268b18e pkgdep/git: Adjust QAT driver to kernel >= 6.8.x
00:00:46.383 5e75b9137ab scripts/pkgdep: Simplify mdl installation
00:00:46.407 [Pipeline] }
00:00:46.429 [Pipeline] // stage
00:00:46.440 [Pipeline] stage
00:00:46.443 [Pipeline] { (Prepare)
00:00:46.462 [Pipeline] writeFile
00:00:46.479 [Pipeline] sh
00:00:46.763 + logger -p user.info -t JENKINS-CI
00:00:46.774 [Pipeline] sh
00:00:47.055 + logger -p user.info -t JENKINS-CI
00:00:47.066 [Pipeline] sh
00:00:47.347 + cat autorun-spdk.conf
00:00:47.347 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:47.347 SPDK_TEST_ACCEL_DSA=1
00:00:47.347 SPDK_TEST_ACCEL_IAA=1
00:00:47.347 SPDK_TEST_NVMF=1
00:00:47.347 SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:47.347 SPDK_RUN_ASAN=1
00:00:47.347 SPDK_RUN_UBSAN=1
00:00:47.354 RUN_NIGHTLY=1
00:00:47.359 [Pipeline] readFile
00:00:47.380 [Pipeline] withEnv
00:00:47.382 [Pipeline] {
00:00:47.392 [Pipeline] sh
00:00:47.675 + set -ex
00:00:47.675 + [[ -f /var/jenkins/workspace/dsa-phy-autotest/autorun-spdk.conf ]]
00:00:47.675 + source /var/jenkins/workspace/dsa-phy-autotest/autorun-spdk.conf
00:00:47.675 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:47.675 ++ SPDK_TEST_ACCEL_DSA=1
00:00:47.675 ++ SPDK_TEST_ACCEL_IAA=1
00:00:47.675 ++ SPDK_TEST_NVMF=1
00:00:47.675 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:47.675 ++ SPDK_RUN_ASAN=1
00:00:47.675 ++ SPDK_RUN_UBSAN=1
00:00:47.675 ++ RUN_NIGHTLY=1
00:00:47.675 + case $SPDK_TEST_NVMF_NICS in
00:00:47.675 + DRIVERS=
00:00:47.675 + [[ -n '' ]]
00:00:47.675 + exit 0
00:00:47.684 [Pipeline] }
00:00:47.701 [Pipeline] // withEnv
00:00:47.706 [Pipeline] }
00:00:47.721 [Pipeline] // stage
00:00:47.730 [Pipeline] catchError
00:00:47.732 [Pipeline] {
00:00:47.745 [Pipeline] timeout
00:00:47.746 Timeout set to expire in 50 min
00:00:47.747 [Pipeline] {
00:00:47.761 [Pipeline] stage
00:00:47.763 [Pipeline] { (Tests)
00:00:47.777 [Pipeline] sh
00:00:48.058 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/dsa-phy-autotest
00:00:48.058 ++ readlink -f /var/jenkins/workspace/dsa-phy-autotest
00:00:48.058 + DIR_ROOT=/var/jenkins/workspace/dsa-phy-autotest
00:00:48.058 + [[ -n /var/jenkins/workspace/dsa-phy-autotest ]]
00:00:48.058 + DIR_SPDK=/var/jenkins/workspace/dsa-phy-autotest/spdk
00:00:48.058 + DIR_OUTPUT=/var/jenkins/workspace/dsa-phy-autotest/output
00:00:48.058 + [[ -d /var/jenkins/workspace/dsa-phy-autotest/spdk ]]
00:00:48.058 + [[ ! -d /var/jenkins/workspace/dsa-phy-autotest/output ]]
00:00:48.058 + mkdir -p /var/jenkins/workspace/dsa-phy-autotest/output
00:00:48.058 + [[ -d /var/jenkins/workspace/dsa-phy-autotest/output ]]
00:00:48.058 + cd /var/jenkins/workspace/dsa-phy-autotest
00:00:48.058 + source /etc/os-release
00:00:48.058 ++ NAME='Fedora Linux'
00:00:48.058 ++ VERSION='38 (Cloud Edition)'
00:00:48.058 ++ ID=fedora
00:00:48.058 ++ VERSION_ID=38
00:00:48.058 ++ VERSION_CODENAME=
00:00:48.058 ++ PLATFORM_ID=platform:f38
00:00:48.058 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:00:48.058 ++ ANSI_COLOR='0;38;2;60;110;180'
00:00:48.058 ++ LOGO=fedora-logo-icon
00:00:48.058 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:00:48.058 ++ HOME_URL=https://fedoraproject.org/
00:00:48.058 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:00:48.058 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:00:48.058 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:00:48.058 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:00:48.058 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:00:48.058 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:00:48.058 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:00:48.058 ++ SUPPORT_END=2024-05-14
00:00:48.058 ++ VARIANT='Cloud Edition'
00:00:48.058 ++ VARIANT_ID=cloud
00:00:48.058 + uname -a
00:00:48.058 Linux spdk-fcp-03 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:00:48.058 + sudo /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh status
00:00:49.960 Hugepages
00:00:49.960 node hugesize free / total
00:00:49.960 node0 1048576kB 0 / 0
00:00:49.960 node0 2048kB 0 / 0
00:00:49.960 node1 1048576kB 0 / 0
00:00:49.960 node1 2048kB 0 / 0
00:00:49.960
00:00:49.960 Type BDF Vendor Device NUMA Driver Device Block devices
00:00:49.960 NVMe 0000:03:00.0 1344 51c3 0 nvme nvme1 nvme1n1
00:00:49.960 DSA 0000:6a:01.0 8086 0b25 0 idxd - -
00:00:49.960 IAA 0000:6a:02.0 8086 0cfe 0 idxd - -
00:00:49.960 DSA 0000:6f:01.0 8086 0b25 0 idxd - -
00:00:49.960 IAA 0000:6f:02.0 8086 0cfe 0 idxd - -
00:00:49.960 DSA 0000:74:01.0 8086 0b25 0 idxd - -
00:00:49.960 IAA 0000:74:02.0 8086 0cfe 0 idxd - -
00:00:49.960 DSA 0000:79:01.0 8086 0b25 0 idxd - -
00:00:49.960 IAA 0000:79:02.0 8086 0cfe 0 idxd - -
00:00:50.220 NVMe 0000:c9:00.0 144d a80a 1 nvme nvme0 nvme0n1
00:00:50.220 DSA 0000:e7:01.0 8086 0b25 1 idxd - -
00:00:50.221 IAA 0000:e7:02.0 8086 0cfe 1 idxd - -
00:00:50.221 DSA 0000:ec:01.0 8086 0b25 1 idxd - -
00:00:50.221 IAA 0000:ec:02.0 8086 0cfe 1 idxd - -
00:00:50.221 DSA 0000:f1:01.0 8086 0b25 1 idxd - -
00:00:50.221 IAA 0000:f1:02.0 8086 0cfe 1 idxd - -
00:00:50.221 DSA 0000:f6:01.0 8086 0b25 1 idxd - -
00:00:50.221 IAA 0000:f6:02.0 8086 0cfe 1 idxd - -
00:00:50.221 + rm -f /tmp/spdk-ld-path
00:00:50.221 + source autorun-spdk.conf
00:00:50.221 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:50.221 ++ SPDK_TEST_ACCEL_DSA=1
00:00:50.221 ++ SPDK_TEST_ACCEL_IAA=1
00:00:50.221 ++ SPDK_TEST_NVMF=1
00:00:50.221 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:50.221 ++ SPDK_RUN_ASAN=1
00:00:50.221 ++ SPDK_RUN_UBSAN=1
00:00:50.221 ++ RUN_NIGHTLY=1
00:00:50.221 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:00:50.221 + [[ -n '' ]]
00:00:50.221 + sudo git config --global --add safe.directory /var/jenkins/workspace/dsa-phy-autotest/spdk
00:00:50.221 + for M in /var/spdk/build-*-manifest.txt
00:00:50.221 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:00:50.221 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/dsa-phy-autotest/output/
00:00:50.221 + for M in /var/spdk/build-*-manifest.txt
00:00:50.221 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:00:50.221 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/dsa-phy-autotest/output/
00:00:50.221 ++ uname
00:00:50.221 + [[ Linux == \L\i\n\u\x ]]
00:00:50.221 + sudo dmesg -T
00:00:50.221 + sudo dmesg --clear
00:00:50.221 + dmesg_pid=1101605
00:00:50.221 + [[ Fedora Linux == FreeBSD ]]
00:00:50.221 + sudo dmesg -Tw
00:00:50.221 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:50.221 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:50.221 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:00:50.221 + [[ -x /usr/src/fio-static/fio ]]
00:00:50.221 + export FIO_BIN=/usr/src/fio-static/fio
00:00:50.221 + FIO_BIN=/usr/src/fio-static/fio
00:00:50.221 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\d\s\a\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:00:50.221 + [[ ! -v VFIO_QEMU_BIN ]]
00:00:50.221 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:00:50.221 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:50.221 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:50.221 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:00:50.221 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:50.221 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:50.221 + spdk/autorun.sh /var/jenkins/workspace/dsa-phy-autotest/autorun-spdk.conf
00:00:50.221 Test configuration:
00:00:50.221 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:50.221 SPDK_TEST_ACCEL_DSA=1
00:00:50.221 SPDK_TEST_ACCEL_IAA=1
00:00:50.221 SPDK_TEST_NVMF=1
00:00:50.221 SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:50.221 SPDK_RUN_ASAN=1
00:00:50.221 SPDK_RUN_UBSAN=1
00:00:50.221 RUN_NIGHTLY=1
21:01:44 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh
21:01:44 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
21:01:44 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
21:01:44 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
21:01:44 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
21:01:44 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
21:01:44 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
21:01:44 -- paths/export.sh@5 -- $ export PATH
21:01:44 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
21:01:44 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/dsa-phy-autotest/spdk/../output
21:01:44 -- common/autobuild_common.sh@435 -- $ date +%s
21:01:44 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713898904.XXXXXX
21:01:44 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713898904.4BTV7i
21:01:44 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]]
21:01:44 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']'
21:01:44 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/'
21:01:44 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/dsa-phy-autotest/spdk/xnvme --exclude /tmp'
21:01:44 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/dsa-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
21:01:44 -- common/autobuild_common.sh@451 -- $ get_config_params
21:01:44 -- common/autotest_common.sh@385 -- $ xtrace_disable
21:01:44 -- common/autotest_common.sh@10 -- $ set +x
00:00:50.483 21:01:44 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk'
21:01:44 -- common/autobuild_common.sh@453 -- $ start_monitor_resources
21:01:44 -- pm/common@17 -- $ local monitor
21:01:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
21:01:44 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=1101639
21:01:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
21:01:44 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=1101640
21:01:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
21:01:44 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=1101642
21:01:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
21:01:44 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=1101644
21:01:44 -- pm/common@26 -- $ sleep 1
21:01:44 -- pm/common@21 -- $ date +%s
21:01:44 -- pm/common@21 -- $ date +%s
21:01:44 -- pm/common@21 -- $ date +%s
00:00:50.483 21:01:44 -- pm/common@21 -- $ date +%s
00:00:50.483 21:01:44 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1713898904
00:00:50.483 21:01:44 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1713898904
00:00:50.483 21:01:44 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1713898904
00:00:50.483 21:01:44 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1713898904
00:00:50.483 Redirecting to /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1713898904_collect-vmstat.pm.log
00:00:50.483 Redirecting to /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1713898904_collect-bmc-pm.bmc.pm.log
00:00:50.483 Redirecting to /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1713898904_collect-cpu-temp.pm.log
00:00:50.483 Redirecting to /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1713898904_collect-cpu-load.pm.log
00:00:51.422 21:01:45 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT
00:00:51.422 21:01:45 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:00:51.422 21:01:45 -- spdk/autobuild.sh@12 -- $ umask 022
00:00:51.422 21:01:45 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/dsa-phy-autotest/spdk
00:00:51.422 21:01:45 -- spdk/autobuild.sh@16 -- $ date -u
00:00:51.422 Tue Apr 23 07:01:45 PM UTC 2024
00:00:51.422 21:01:45 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:00:51.422 v24.05-pre-437-g3f2c8979187
00:00:51.422 21:01:45 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:00:51.422 21:01:45 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:00:51.422 21:01:45 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']'
00:00:51.422 21:01:45 -- common/autotest_common.sh@1093 -- $ xtrace_disable
00:00:51.422 21:01:45 -- common/autotest_common.sh@10 -- $ set +x
00:00:51.422 ************************************
00:00:51.422 START TEST asan
00:00:51.422 ************************************
00:00:51.422 21:01:45 -- common/autotest_common.sh@1111 -- $ echo 'using asan'
00:00:51.422 using asan
00:00:51.422
00:00:51.422 real 0m0.000s
00:00:51.422 user 0m0.000s
00:00:51.422 sys 0m0.000s
00:00:51.422 21:01:45 -- common/autotest_common.sh@1112 -- $ xtrace_disable
00:00:51.422 21:01:45 -- common/autotest_common.sh@10 -- $ set +x
00:00:51.422 ************************************
00:00:51.422 END TEST asan
00:00:51.422 ************************************
00:00:51.422 21:01:45 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:00:51.422 21:01:45 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:00:51.422 21:01:45 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']'
00:00:51.422 21:01:45 -- common/autotest_common.sh@1093 -- $ xtrace_disable
00:00:51.422 21:01:45 -- common/autotest_common.sh@10 -- $ set +x
00:00:51.682 ************************************
00:00:51.682 START TEST ubsan
00:00:51.682 ************************************
00:00:51.682 21:01:45 -- common/autotest_common.sh@1111 -- $ echo 'using ubsan'
00:00:51.682 using ubsan
00:00:51.682
00:00:51.682 real 0m0.000s
00:00:51.682 user 0m0.000s
00:00:51.682 sys 0m0.000s
00:00:51.682 21:01:45 -- common/autotest_common.sh@1112 -- $ xtrace_disable
00:00:51.682 21:01:45 -- common/autotest_common.sh@10 -- $ set +x
00:00:51.682 ************************************
00:00:51.682 END TEST ubsan
00:00:51.682 ************************************
00:00:51.682 21:01:45 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:00:51.682 21:01:45 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:00:51.682 21:01:45 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:00:51.682 21:01:45 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:00:51.682 21:01:45 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:00:51.682 21:01:45 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:00:51.682 21:01:45 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:00:51.682 21:01:45 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:00:51.682 21:01:45 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/dsa-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-shared
00:00:51.682 Using default SPDK env in /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk
00:00:51.682 Using default DPDK in /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build
00:00:51.943 Using 'verbs' RDMA provider
00:01:05.128 Configuring ISA-L (logfile: /var/jenkins/workspace/dsa-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:15.141 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/dsa-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:15.141 Creating mk/config.mk...done.
00:01:15.141 Creating mk/cc.flags.mk...done.
00:01:15.141 Type 'make' to build.
00:01:15.141 21:02:08 -- spdk/autobuild.sh@69 -- $ run_test make make -j128
00:01:15.141 21:02:08 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']'
00:01:15.141 21:02:08 -- common/autotest_common.sh@1093 -- $ xtrace_disable
00:01:15.141 21:02:08 -- common/autotest_common.sh@10 -- $ set +x
00:01:15.141 ************************************
00:01:15.141 START TEST make
00:01:15.141 ************************************
00:01:15.141 21:02:08 -- common/autotest_common.sh@1111 -- $ make -j128
00:01:15.141 make[1]: Nothing to be done for 'all'.
00:01:21.707 The Meson build system
00:01:21.707 Version: 1.3.1
00:01:21.707 Source dir: /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk
00:01:21.707 Build dir: /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build-tmp
00:01:21.707 Build type: native build
00:01:21.707 Program cat found: YES (/usr/bin/cat)
00:01:21.707 Project name: DPDK
00:01:21.707 Project version: 23.11.0
00:01:21.707 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:21.707 C linker for the host machine: cc ld.bfd 2.39-16
00:01:21.707 Host machine cpu family: x86_64
00:01:21.707 Host machine cpu: x86_64
00:01:21.707 Message: ## Building in Developer Mode ##
00:01:21.707 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:21.707 Program check-symbols.sh found: YES (/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:21.707 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:21.707 Program python3 found: YES (/usr/bin/python3)
00:01:21.707 Program cat found: YES (/usr/bin/cat)
00:01:21.707 Compiler for C supports arguments -march=native: YES
00:01:21.707 Checking for size of "void *" : 8
00:01:21.707 Checking for size of "void *" : 8 (cached)
00:01:21.707 Library m found: YES
00:01:21.707 Library numa found: YES
00:01:21.707 Has header "numaif.h" : YES
00:01:21.707 Library fdt found: NO
00:01:21.707 Library execinfo found: NO
00:01:21.707 Has header "execinfo.h" : YES
00:01:21.707 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:21.707 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:21.707 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:21.707 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:21.707 Run-time dependency openssl found: YES 3.0.9
00:01:21.707 Run-time dependency libpcap found: YES 1.10.4
00:01:21.707 Has header "pcap.h" with dependency libpcap: YES
00:01:21.707 Compiler for C supports arguments -Wcast-qual: YES
00:01:21.707 Compiler for C supports arguments -Wdeprecated: YES
00:01:21.707 Compiler for C supports arguments -Wformat: YES
00:01:21.707 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:21.707 Compiler for C supports arguments -Wformat-security: NO
00:01:21.707 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:21.707 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:21.707 Compiler for C supports arguments -Wnested-externs: YES
00:01:21.707 Compiler for C supports arguments -Wold-style-definition: YES
00:01:21.707 Compiler for C supports arguments -Wpointer-arith: YES
00:01:21.707 Compiler for C supports arguments -Wsign-compare: YES
00:01:21.707 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:21.707 Compiler for C supports arguments -Wundef: YES
00:01:21.707 Compiler for C supports arguments -Wwrite-strings: YES
00:01:21.707 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:21.707 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:21.707 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:21.707 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:21.707 Program objdump found: YES (/usr/bin/objdump)
00:01:21.707 Compiler for C supports arguments -mavx512f: YES
00:01:21.707 Checking if "AVX512 checking" compiles: YES
00:01:21.707 Fetching value of define "__SSE4_2__" : 1
00:01:21.707 Fetching value of define "__AES__" : 1
00:01:21.707 Fetching value of define "__AVX__" : 1
00:01:21.707 Fetching value of define "__AVX2__" : 1
00:01:21.707 Fetching value of define "__AVX512BW__" : 1
00:01:21.707 Fetching value of define "__AVX512CD__" : 1
00:01:21.707 Fetching value of define "__AVX512DQ__" : 1
00:01:21.707 Fetching value of define "__AVX512F__" : 1
00:01:21.707 Fetching value of define "__AVX512VL__" : 1
00:01:21.707 Fetching value of define "__PCLMUL__" : 1
00:01:21.707 Fetching value of define "__RDRND__" : 1
00:01:21.707 Fetching value of define "__RDSEED__" : 1
00:01:21.707 Fetching value of define "__VPCLMULQDQ__" : 1
00:01:21.707 Fetching value of define "__znver1__" : (undefined)
00:01:21.707 Fetching value of define "__znver2__" : (undefined)
00:01:21.707 Fetching value of define "__znver3__" : (undefined)
00:01:21.707 Fetching value of define "__znver4__" : (undefined)
00:01:21.707 Library asan found: YES
00:01:21.707 Compiler for C supports arguments -Wno-format-truncation: YES
00:01:21.707 Message: lib/log: Defining dependency "log"
00:01:21.707 Message: lib/kvargs: Defining dependency "kvargs"
00:01:21.707 Message: lib/telemetry: Defining dependency "telemetry"
00:01:21.707 Library rt found: YES
00:01:21.707 Checking for function "getentropy" : NO
00:01:21.707 Message: lib/eal: Defining dependency "eal"
00:01:21.707 Message: lib/ring: Defining dependency "ring"
00:01:21.707 Message: lib/rcu: Defining dependency "rcu"
00:01:21.707 Message: lib/mempool: Defining dependency "mempool"
00:01:21.707 Message: lib/mbuf: Defining dependency "mbuf"
00:01:21.707 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:21.707 Fetching value of define "__AVX512F__" : 1 (cached)
00:01:21.707 Fetching value of define "__AVX512BW__" : 1 (cached)
00:01:21.707 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:01:21.708 Fetching value of define "__AVX512VL__" : 1 (cached)
00:01:21.708 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:01:21.708 Compiler for C supports arguments -mpclmul: YES
00:01:21.708 Compiler for C supports arguments -maes: YES
00:01:21.708 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:21.708 Compiler for C supports arguments -mavx512bw: YES
00:01:21.708 Compiler for C supports arguments -mavx512dq: YES
00:01:21.708 Compiler for C supports arguments -mavx512vl: YES
00:01:21.708 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:21.708 Compiler for C supports arguments -mavx2: YES
00:01:21.708 Compiler for C supports arguments -mavx: YES
00:01:21.708 Message: lib/net: Defining dependency "net"
00:01:21.708 Message: lib/meter: Defining dependency "meter"
00:01:21.708 Message: lib/ethdev: Defining dependency "ethdev"
00:01:21.708 Message: lib/pci: Defining dependency "pci"
00:01:21.708 Message: lib/cmdline: Defining dependency "cmdline"
00:01:21.708 Message: lib/hash: Defining dependency "hash"
00:01:21.708 Message: lib/timer: Defining dependency "timer"
00:01:21.708 Message: lib/compressdev: Defining dependency "compressdev"
00:01:21.708 Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:21.708 Message: lib/dmadev: Defining dependency "dmadev"
00:01:21.708 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:21.708 Message: lib/power: Defining dependency "power"
00:01:21.708 Message: lib/reorder: Defining dependency "reorder"
00:01:21.708 Message: lib/security: Defining dependency "security"
00:01:21.708 Has header "linux/userfaultfd.h" : YES
00:01:21.708 Has header "linux/vduse.h" : YES
00:01:21.708 Message: lib/vhost: Defining dependency "vhost"
00:01:21.708 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:21.708 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:21.708 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:21.708 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:21.708 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:01:21.708 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:01:21.708 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:01:21.708 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:01:21.708 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:01:21.708 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:01:21.708 Program doxygen found: YES (/usr/bin/doxygen)
00:01:21.708 Configuring doxy-api-html.conf using configuration
00:01:21.708 Configuring doxy-api-man.conf using configuration
00:01:21.708 Program mandb found: YES (/usr/bin/mandb)
00:01:21.708 Program sphinx-build found: NO
00:01:21.708 Configuring rte_build_config.h using configuration
00:01:21.708 Message:
00:01:21.708 =================
00:01:21.708 Applications Enabled
00:01:21.708 =================
00:01:21.708
00:01:21.708 apps:
00:01:21.708
00:01:21.708
00:01:21.708 Message:
00:01:21.708 =================
00:01:21.708 Libraries Enabled
00:01:21.708 =================
00:01:21.708
00:01:21.708 libs:
00:01:21.708 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:01:21.708 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:01:21.708 cryptodev, dmadev, power, reorder, security, vhost,
00:01:21.708
00:01:21.708 Message:
00:01:21.708 ===============
00:01:21.708 Drivers Enabled
00:01:21.708 ===============
00:01:21.708
00:01:21.708 common:
00:01:21.708
00:01:21.708 bus:
00:01:21.708 pci, vdev,
00:01:21.708 mempool:
00:01:21.708 ring,
00:01:21.708 dma:
00:01:21.708
00:01:21.708 net:
00:01:21.708
00:01:21.708 crypto:
00:01:21.708
00:01:21.708 compress:
00:01:21.708
00:01:21.708 vdpa:
00:01:21.708
00:01:21.708
00:01:21.708 Message:
00:01:21.708 =================
00:01:21.708 Content Skipped
00:01:21.708 =================
00:01:21.708
00:01:21.708 apps:
00:01:21.708 dumpcap: explicitly disabled via build config
00:01:21.708 graph: explicitly disabled via build config
00:01:21.708 pdump: explicitly disabled via build config
00:01:21.708 proc-info: explicitly disabled via build config
00:01:21.708 test-acl: explicitly disabled via build config
00:01:21.708 test-bbdev: explicitly disabled via build config
00:01:21.708 test-cmdline: explicitly disabled via build config
00:01:21.708 test-compress-perf: explicitly disabled via build config
00:01:21.708 test-crypto-perf: explicitly disabled via build config
00:01:21.708 test-dma-perf: explicitly disabled via build config
00:01:21.708 test-eventdev: explicitly disabled via build config
00:01:21.708 test-fib: explicitly disabled via build config
00:01:21.708 test-flow-perf: explicitly disabled via build config
00:01:21.708 test-gpudev: explicitly disabled via build config
00:01:21.708 test-mldev: explicitly disabled via build config
00:01:21.708 test-pipeline: explicitly disabled via build config
00:01:21.708 test-pmd: explicitly disabled via build config
00:01:21.708 test-regex: explicitly disabled via build config
00:01:21.708 test-sad: explicitly disabled via build config
00:01:21.708 test-security-perf: explicitly disabled via build config
00:01:21.708
00:01:21.708 libs:
00:01:21.708 metrics: explicitly disabled via build config
00:01:21.708 acl: explicitly disabled via build config
00:01:21.708 bbdev: explicitly disabled via build config
00:01:21.708 bitratestats: explicitly disabled via build config
00:01:21.708 bpf: explicitly disabled via build config
00:01:21.708 cfgfile: explicitly disabled via build config
00:01:21.708 distributor: explicitly disabled via build config
00:01:21.708 efd: explicitly disabled via build config
00:01:21.708 eventdev: explicitly disabled via build config
00:01:21.708 dispatcher: explicitly disabled via build config
00:01:21.708 gpudev: explicitly disabled via build config
00:01:21.708 gro: explicitly disabled via build config
00:01:21.708 gso: explicitly disabled via build config
00:01:21.708 ip_frag: explicitly disabled via build config
00:01:21.708 jobstats: explicitly disabled via build config
00:01:21.708 latencystats: explicitly disabled via build config
00:01:21.708 lpm: explicitly disabled via build config
00:01:21.708 member: explicitly disabled via build config
00:01:21.708 pcapng: explicitly disabled via build config
00:01:21.708 rawdev: explicitly disabled via build config
00:01:21.708 regexdev: explicitly disabled via build config
00:01:21.708 mldev: explicitly disabled via build config
00:01:21.708 rib: explicitly disabled via build config
00:01:21.708 sched: explicitly disabled via build config
00:01:21.708 stack: explicitly disabled via build config
00:01:21.708 ipsec: explicitly disabled via build config
00:01:21.708 pdcp: explicitly disabled via build config
00:01:21.708 fib: explicitly disabled via build config
00:01:21.708 port: explicitly disabled via build config
00:01:21.708 pdump: explicitly disabled via build config
00:01:21.708 table: explicitly disabled via build config
00:01:21.708 pipeline: explicitly disabled via build config
00:01:21.708 graph: explicitly disabled via build config
00:01:21.708 node: explicitly disabled via build config
00:01:21.708
00:01:21.708 drivers:
00:01:21.708 common/cpt: not in enabled drivers build config
00:01:21.708 common/dpaax: not in enabled drivers build config
00:01:21.708 common/iavf: not in enabled drivers build config
00:01:21.708 common/idpf: not in enabled drivers build config
00:01:21.708 common/mvep: not in enabled drivers build config
00:01:21.708 common/octeontx: not in enabled drivers build config
00:01:21.708 bus/auxiliary: not in enabled drivers build config
00:01:21.708 bus/cdx: not in enabled drivers build config
00:01:21.708 bus/dpaa: not in enabled drivers build config
00:01:21.708 bus/fslmc: not in enabled drivers build config
00:01:21.708 bus/ifpga: not in enabled drivers build config
00:01:21.708 bus/platform: not in enabled drivers build config
00:01:21.708 bus/vmbus: not in enabled drivers build config
00:01:21.708 common/cnxk: not in enabled drivers build config
00:01:21.708 common/mlx5: not in enabled drivers build config
00:01:21.708 common/nfp: not in enabled drivers build config
00:01:21.708 common/qat: not in enabled drivers build config
00:01:21.708 common/sfc_efx: not in enabled drivers build config
00:01:21.708 mempool/bucket: not in enabled drivers build config
00:01:21.708 mempool/cnxk: not in enabled drivers build config
00:01:21.708 mempool/dpaa: not in enabled drivers build config
00:01:21.708 mempool/dpaa2: not in enabled drivers build config
00:01:21.708 mempool/octeontx: not in enabled drivers build config
00:01:21.708 mempool/stack: not in enabled drivers build config
00:01:21.708 dma/cnxk: not in enabled drivers build config
00:01:21.708 dma/dpaa: not in enabled drivers build config
00:01:21.708 dma/dpaa2: not in enabled drivers build config
00:01:21.708 dma/hisilicon: not in enabled drivers build config
00:01:21.708 dma/idxd: not in enabled drivers build config
00:01:21.708 dma/ioat: not in enabled drivers build config
00:01:21.708 dma/skeleton: not in enabled drivers build config
00:01:21.708 net/af_packet: not in enabled drivers build config
00:01:21.708 net/af_xdp: not in enabled drivers build config
00:01:21.708 net/ark: not in enabled drivers build config
00:01:21.708 net/atlantic: not in enabled drivers build config
00:01:21.708 net/avp: not in enabled drivers build config
00:01:21.708 net/axgbe: not in enabled drivers build config
00:01:21.708 net/bnx2x: not in enabled drivers build config
00:01:21.708 net/bnxt: not in enabled drivers build config
00:01:21.708 net/bonding: not in enabled drivers build config
00:01:21.708 net/cnxk: not in enabled drivers build config
00:01:21.708 net/cpfl: not in enabled drivers build config
00:01:21.708 net/cxgbe: not in enabled drivers build config
00:01:21.708 net/dpaa: not in enabled drivers build config
00:01:21.708 net/dpaa2: not in enabled drivers build config
00:01:21.708 net/e1000: not in enabled drivers build config
00:01:21.708 net/ena: not in enabled drivers build config
00:01:21.708 net/enetc: not in enabled drivers build config
00:01:21.708 net/enetfec: not in enabled drivers build config
00:01:21.708 net/enic: not in enabled drivers build config
00:01:21.708 net/failsafe: not in enabled drivers build config
00:01:21.708 net/fm10k: not in enabled drivers build config
00:01:21.708 net/gve: not in enabled drivers build config
00:01:21.708 net/hinic: not in enabled drivers build config
00:01:21.708 net/hns3: not in enabled drivers build config
00:01:21.708 net/i40e: not in enabled drivers build config
00:01:21.708 net/iavf: not in enabled drivers build config
00:01:21.708 net/ice: not in enabled drivers build config
00:01:21.708 net/idpf: not in enabled drivers build config
00:01:21.709 net/igc: not in enabled drivers build config
00:01:21.709 net/ionic: not in enabled drivers build config
00:01:21.709 net/ipn3ke: not in enabled drivers build config
00:01:21.709 net/ixgbe: not in enabled drivers build config
00:01:21.709 net/mana: not in enabled drivers build config
00:01:21.709 net/memif: not in enabled drivers build config
00:01:21.709 net/mlx4: not in enabled drivers build config
00:01:21.709 net/mlx5: not in enabled drivers build config
00:01:21.709 net/mvneta: not in enabled drivers build config
00:01:21.709 net/mvpp2: not in enabled drivers build config
00:01:21.709 net/netvsc: not in enabled drivers build config
00:01:21.709 net/nfb: not in enabled drivers build config
00:01:21.709 net/nfp: not in enabled drivers build config
00:01:21.709 net/ngbe: not in enabled drivers build config
00:01:21.709 net/null: not in enabled drivers build config
00:01:21.709 net/octeontx: not in enabled drivers build config
00:01:21.709 net/octeon_ep: not in enabled drivers build config
00:01:21.709 net/pcap: not in enabled drivers build config
00:01:21.709 net/pfe: not in enabled drivers build config
00:01:21.709 net/qede: not in enabled drivers build config
00:01:21.709 net/ring: not in enabled drivers build config
00:01:21.709 net/sfc: not in enabled drivers build config
00:01:21.709 net/softnic: not in enabled drivers build config
00:01:21.709 net/tap: not in enabled drivers build config
00:01:21.709 net/thunderx: not in enabled drivers build config
00:01:21.709 net/txgbe: not in enabled drivers build config
00:01:21.709 net/vdev_netvsc: not in enabled drivers build config
00:01:21.709 net/vhost: not in enabled drivers build config
00:01:21.709 net/virtio: not in enabled drivers build config
00:01:21.709 net/vmxnet3: not in enabled drivers build config
00:01:21.709 raw/*: missing internal dependency, "rawdev"
00:01:21.709 crypto/armv8: not in enabled drivers build config
00:01:21.709 crypto/bcmfs: not in enabled drivers build config
00:01:21.709 crypto/caam_jr: not in enabled drivers build config
00:01:21.709 crypto/ccp: not in enabled drivers build config
00:01:21.709 crypto/cnxk: not in enabled drivers build config
00:01:21.709 crypto/dpaa_sec: not in enabled drivers build config
00:01:21.709 crypto/dpaa2_sec: not in enabled drivers build config
00:01:21.709 crypto/ipsec_mb: not in enabled drivers build config
00:01:21.709 crypto/mlx5: not in enabled drivers build config
00:01:21.709 crypto/mvsam: not in enabled drivers build config
00:01:21.709 crypto/nitrox: not in enabled drivers build config
00:01:21.709 crypto/null: not in enabled drivers build config
00:01:21.709 crypto/octeontx: not in enabled drivers build config
00:01:21.709 crypto/openssl: not in enabled drivers build config
00:01:21.709 crypto/scheduler: not in enabled drivers build config
00:01:21.709 crypto/uadk: not in enabled drivers build config
00:01:21.709 crypto/virtio: not in enabled drivers build config
00:01:21.709 compress/isal: not in enabled drivers build config
00:01:21.709 compress/mlx5: not in enabled drivers build config
00:01:21.709 compress/octeontx: not in enabled drivers build config
00:01:21.709 compress/zlib: not in enabled drivers build config
00:01:21.709 regex/*: missing internal dependency, "regexdev"
00:01:21.709 ml/*: missing internal dependency, "mldev"
00:01:21.709 vdpa/ifc: not in enabled drivers build config
00:01:21.709 vdpa/mlx5: not in enabled drivers build config
00:01:21.709 vdpa/nfp: not in enabled drivers build config
00:01:21.709 vdpa/sfc: not in enabled drivers build config
00:01:21.709 event/*: missing internal dependency, "eventdev"
00:01:21.709 baseband/*: missing internal dependency, "bbdev"
00:01:21.709 gpu/*: missing internal dependency, "gpudev"
00:01:21.709
00:01:21.709
00:01:21.709 Build targets in project: 84
00:01:21.709
00:01:21.709 DPDK 23.11.0
00:01:21.709
00:01:21.709 User defined options
00:01:21.709 buildtype : debug
00:01:21.709 default_library : shared
00:01:21.709 libdir : lib
00:01:21.709 prefix : /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build
00:01:21.709 b_sanitize : address
00:01:21.709 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:01:21.709 c_link_args :
00:01:21.709 cpu_instruction_set: native
00:01:21.709 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:01:21.709 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:01:21.709 enable_docs : false
00:01:21.709 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:01:21.709 enable_kmods : false
00:01:21.709 tests : false
00:01:21.709
00:01:21.709 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:21.709 ninja: Entering directory `/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build-tmp'
00:01:21.709 [1/264] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:01:21.709 [2/264] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:01:21.709 [3/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:01:21.709 [4/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:01:21.709 [5/264] Linking static target lib/librte_kvargs.a
00:01:21.709 [6/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:01:21.709 [7/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:01:21.709 [8/264] Compiling C object lib/librte_log.a.p/log_log.c.o
00:01:21.709 [9/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:01:21.709 [10/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:01:21.709 [11/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:01:21.709 [12/264] Linking static target lib/librte_log.a
00:01:21.709 [13/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:01:21.709 [14/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:01:21.709 [15/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:01:21.709 [16/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:01:21.709 [17/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:01:21.709 [18/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:01:21.709 [19/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:01:21.709 [20/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:01:21.709 [21/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:01:21.709 [22/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:01:21.709 [23/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:01:21.709 [24/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:01:21.709 [25/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:01:21.709 [26/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:01:21.709 [27/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:01:21.709 [28/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:01:21.709 [29/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:01:21.709 [30/264] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:01:21.709 [31/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:01:21.709 [32/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:01:21.709 [33/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:01:21.709 [34/264] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:01:21.709 [35/264] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:01:21.709 [36/264] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:01:21.709 [37/264] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:01:21.709 [38/264] Linking static target lib/librte_pci.a
00:01:21.709 [39/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:01:21.709 [40/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:01:21.709 [41/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:01:21.971 [42/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:01:21.971 [43/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:01:21.971 [44/264] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:01:21.971 [45/264] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:01:21.971 [46/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:01:21.971 [47/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:01:21.971 [48/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:01:21.971 [49/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:01:21.971 [50/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:01:21.971 [51/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:01:21.971 [52/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:01:21.971 [53/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:01:21.971 [54/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:01:21.971 [55/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:01:21.971 [56/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:01:21.971 [57/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:01:21.971 [58/264] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:01:21.971 [59/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:01:21.971 [60/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:01:21.971 [61/264] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:01:21.971 [62/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:01:21.971 [63/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:01:21.971 [64/264] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:01:21.971 [65/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:01:21.971 [66/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:01:21.971 [67/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:01:21.971 [68/264] Linking static target lib/librte_telemetry.a
00:01:21.971 [69/264] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:01:21.971 [70/264] Linking static target lib/librte_ring.a
00:01:21.971 [71/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:01:21.971 [72/264] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:01:21.971 [73/264] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:01:21.971 [74/264] Linking static target lib/librte_meter.a
00:01:21.971 [75/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:01:21.971 [76/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:01:21.971 [77/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:01:21.971 [78/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:01:21.971 [79/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:01:21.971 [80/264] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:01:21.972 [81/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:01:21.972 [82/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:01:21.972 [83/264] Linking static target lib/librte_timer.a
00:01:21.972 [84/264] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:01:21.972 [85/264] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:01:21.972 [86/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:01:21.972 [87/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:01:21.972 [88/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:01:21.972 [89/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:01:21.972 [90/264] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:01:21.972 [91/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:01:21.972 [92/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:01:21.972 [93/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:01:21.972 [94/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:01:21.972 [95/264] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:01:21.972 [96/264] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:01:21.972 [97/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:01:21.972 [98/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:01:21.972 [99/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:01:21.972 [100/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:01:21.972 [101/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:01:21.972 [102/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:01:21.972 [103/264] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:01:21.972 [104/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:01:21.972 [105/264] Linking static target lib/librte_dmadev.a
00:01:21.972 [106/264] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:01:21.972 [107/264] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:01:21.972 [108/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:01:21.972 [109/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:01:21.972 [110/264] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:01:21.972 [111/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:01:21.972 [112/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:01:21.972 [113/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:01:21.972 [114/264] Linking target lib/librte_log.so.24.0
00:01:22.231 [115/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:01:22.231 [116/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:01:22.231 [117/264] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:01:22.231 [118/264] Linking static target lib/librte_cmdline.a
00:01:22.231 [119/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:01:22.231 [120/264] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:01:22.231 [121/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:01:22.231 [122/264] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o
00:01:22.231 [123/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:01:22.231 [124/264] Linking static target lib/librte_net.a
00:01:22.231 [125/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:01:22.231 [126/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:01:22.231 [127/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:01:22.231 [128/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:01:22.231 [129/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:01:22.231 [130/264] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:01:22.231 [131/264] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:01:22.231 [132/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:01:22.231 [133/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:01:22.231 [134/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:01:22.231 [135/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:01:22.231 [136/264] Linking static target lib/librte_mempool.a
00:01:22.231 [137/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:01:22.231 [138/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:01:22.231 [139/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:01:22.231 [140/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:01:22.231 [141/264] Linking static target drivers/libtmp_rte_bus_vdev.a
00:01:22.231 [142/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:01:22.231 [143/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:01:22.231 [144/264] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:01:22.231 [145/264] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols
00:01:22.231 [146/264] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:01:22.231 [147/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:01:22.231 [148/264] Linking target lib/librte_kvargs.so.24.0
00:01:22.231 [149/264] Linking target lib/librte_telemetry.so.24.0
00:01:22.231 [150/264] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:01:22.231 [151/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:01:22.231 [152/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:01:22.231 [153/264] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:01:22.231 [154/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:01:22.231 [155/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:01:22.231 [156/264] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:01:22.231 [157/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:01:22.231 [158/264] Linking static target lib/librte_compressdev.a
00:01:22.231 [159/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:01:22.231 [160/264] Compiling C
object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:22.231 [161/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:22.231 [162/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:22.231 [163/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:22.231 [164/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:22.231 [165/264] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:22.231 [166/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:22.231 [167/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:22.231 [168/264] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:22.231 [169/264] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:22.231 [170/264] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:22.231 [171/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:22.231 [172/264] Linking static target lib/librte_rcu.a 00:01:22.231 [173/264] Linking static target lib/librte_security.a 00:01:22.231 [174/264] Linking static target lib/librte_reorder.a 00:01:22.231 [175/264] Linking static target lib/librte_eal.a 00:01:22.231 [176/264] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.231 [177/264] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:22.231 [178/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:22.231 [179/264] Linking static target lib/librte_power.a 00:01:22.231 [180/264] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:22.231 [181/264] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:22.231 [182/264] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:22.231 [183/264] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:22.231 [184/264] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.231 [185/264] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:22.231 [186/264] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:22.231 [187/264] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:22.231 [188/264] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:22.231 [189/264] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:22.231 [190/264] Linking static target drivers/librte_bus_vdev.a 00:01:22.489 [191/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:22.489 [192/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:22.489 [193/264] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:22.489 [194/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:22.489 [195/264] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:22.489 [196/264] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:22.489 [197/264] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:22.489 [198/264] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.489 [199/264] Linking static target lib/librte_hash.a 00:01:22.489 [200/264] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 
00:01:22.489 [201/264] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:22.489 [202/264] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:22.489 [203/264] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:22.489 [204/264] Linking static target drivers/librte_bus_pci.a 00:01:22.489 [205/264] Linking static target drivers/librte_mempool_ring.a 00:01:22.489 [206/264] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.489 [207/264] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.489 [208/264] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.489 [209/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:22.489 [210/264] Linking static target lib/librte_mbuf.a 00:01:22.747 [211/264] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.747 [212/264] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.747 [213/264] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.747 [214/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:22.747 [215/264] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.005 [216/264] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.005 [217/264] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.005 [218/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:23.005 [219/264] Linking static target lib/librte_cryptodev.a 00:01:23.005 [220/264] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.572 [221/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:23.572 [222/264] Linking static target lib/librte_ethdev.a 00:01:23.830 [223/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:24.394 [224/264] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.295 [225/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:26.295 [226/264] Linking static target lib/librte_vhost.a 00:01:27.272 [227/264] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.720 [228/264] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.720 [229/264] Linking target lib/librte_eal.so.24.0 00:01:28.720 [230/264] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.720 [231/264] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:28.720 [232/264] Linking target lib/librte_ring.so.24.0 00:01:28.721 [233/264] Linking target lib/librte_meter.so.24.0 00:01:28.721 [234/264] Linking target lib/librte_timer.so.24.0 00:01:28.721 [235/264] Linking target lib/librte_pci.so.24.0 00:01:28.721 [236/264] Linking target lib/librte_dmadev.so.24.0 00:01:28.721 [237/264] Linking target drivers/librte_bus_vdev.so.24.0 00:01:28.721 [238/264] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:28.721 [239/264] Generating symbol file 
lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:28.721 [240/264] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:28.982 [241/264] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:28.982 [242/264] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:28.982 [243/264] Linking target drivers/librte_bus_pci.so.24.0 00:01:28.982 [244/264] Linking target lib/librte_mempool.so.24.0 00:01:28.982 [245/264] Linking target lib/librte_rcu.so.24.0 00:01:28.982 [246/264] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:28.982 [247/264] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:28.982 [248/264] Linking target drivers/librte_mempool_ring.so.24.0 00:01:28.982 [249/264] Linking target lib/librte_mbuf.so.24.0 00:01:29.240 [250/264] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:29.240 [251/264] Linking target lib/librte_net.so.24.0 00:01:29.240 [252/264] Linking target lib/librte_cryptodev.so.24.0 00:01:29.240 [253/264] Linking target lib/librte_reorder.so.24.0 00:01:29.240 [254/264] Linking target lib/librte_compressdev.so.24.0 00:01:29.240 [255/264] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:29.240 [256/264] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:29.240 [257/264] Linking target lib/librte_hash.so.24.0 00:01:29.240 [258/264] Linking target lib/librte_security.so.24.0 00:01:29.240 [259/264] Linking target lib/librte_cmdline.so.24.0 00:01:29.240 [260/264] Linking target lib/librte_ethdev.so.24.0 00:01:29.240 [261/264] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:29.498 [262/264] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:01:29.498 [263/264] Linking target lib/librte_vhost.so.24.0 00:01:29.498 [264/264] Linking target lib/librte_power.so.24.0 00:01:29.498 INFO: autodetecting backend as ninja 00:01:29.498 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build-tmp -j 128 00:01:30.065 CC lib/log/log.o 00:01:30.065 CC lib/log/log_flags.o 00:01:30.065 CC lib/log/log_deprecated.o 00:01:30.065 CC lib/ut_mock/mock.o 00:01:30.065 CC lib/ut/ut.o 00:01:30.323 LIB libspdk_ut_mock.a 00:01:30.323 LIB libspdk_log.a 00:01:30.323 SO libspdk_ut_mock.so.6.0 00:01:30.323 SO libspdk_log.so.7.0 00:01:30.323 SYMLINK libspdk_ut_mock.so 00:01:30.323 SYMLINK libspdk_log.so 00:01:30.323 LIB libspdk_ut.a 00:01:30.323 SO libspdk_ut.so.2.0 00:01:30.323 SYMLINK libspdk_ut.so 00:01:30.581 CC lib/ioat/ioat.o 00:01:30.581 CC lib/dma/dma.o 00:01:30.581 CXX lib/trace_parser/trace.o 00:01:30.581 CC lib/util/base64.o 00:01:30.581 CC lib/util/bit_array.o 00:01:30.581 CC lib/util/cpuset.o 00:01:30.581 CC lib/util/crc16.o 00:01:30.581 CC lib/util/crc32.o 00:01:30.581 CC lib/util/crc32c.o 00:01:30.581 CC lib/util/crc32_ieee.o 00:01:30.581 CC lib/util/crc64.o 00:01:30.581 CC lib/util/fd.o 00:01:30.581 CC lib/util/dif.o 00:01:30.581 CC lib/util/iov.o 00:01:30.581 CC lib/util/file.o 00:01:30.581 CC lib/util/pipe.o 00:01:30.581 CC lib/util/hexlify.o 00:01:30.581 CC lib/util/math.o 00:01:30.581 CC lib/util/uuid.o 00:01:30.581 CC lib/util/strerror_tls.o 00:01:30.581 CC lib/util/string.o 00:01:30.581 CC lib/util/xor.o 00:01:30.581 CC lib/util/fd_group.o 00:01:30.581 CC lib/util/zipf.o 00:01:30.581 CC 
lib/vfio_user/host/vfio_user_pci.o 00:01:30.581 CC lib/vfio_user/host/vfio_user.o 00:01:30.842 LIB libspdk_dma.a 00:01:30.842 LIB libspdk_ioat.a 00:01:30.842 SO libspdk_dma.so.4.0 00:01:30.842 SO libspdk_ioat.so.7.0 00:01:30.842 SYMLINK libspdk_dma.so 00:01:30.842 SYMLINK libspdk_ioat.so 00:01:30.842 LIB libspdk_vfio_user.a 00:01:30.842 SO libspdk_vfio_user.so.5.0 00:01:30.842 SYMLINK libspdk_vfio_user.so 00:01:31.102 LIB libspdk_trace_parser.a 00:01:31.102 SO libspdk_trace_parser.so.5.0 00:01:31.102 LIB libspdk_util.a 00:01:31.360 SYMLINK libspdk_trace_parser.so 00:01:31.360 SO libspdk_util.so.9.0 00:01:31.360 SYMLINK libspdk_util.so 00:01:31.619 CC lib/vmd/led.o 00:01:31.619 CC lib/vmd/vmd.o 00:01:31.619 CC lib/env_dpdk/env.o 00:01:31.619 CC lib/env_dpdk/memory.o 00:01:31.619 CC lib/env_dpdk/threads.o 00:01:31.619 CC lib/env_dpdk/pci.o 00:01:31.619 CC lib/env_dpdk/pci_ioat.o 00:01:31.619 CC lib/env_dpdk/init.o 00:01:31.619 CC lib/env_dpdk/pci_virtio.o 00:01:31.619 CC lib/env_dpdk/pci_vmd.o 00:01:31.619 CC lib/conf/conf.o 00:01:31.619 CC lib/env_dpdk/pci_idxd.o 00:01:31.619 CC lib/env_dpdk/sigbus_handler.o 00:01:31.619 CC lib/env_dpdk/pci_event.o 00:01:31.619 CC lib/env_dpdk/pci_dpdk.o 00:01:31.619 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:31.619 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:31.619 CC lib/rdma/rdma_verbs.o 00:01:31.619 CC lib/rdma/common.o 00:01:31.619 CC lib/json/json_parse.o 00:01:31.619 CC lib/json/json_util.o 00:01:31.619 CC lib/json/json_write.o 00:01:31.619 CC lib/idxd/idxd_user.o 00:01:31.619 CC lib/idxd/idxd.o 00:01:31.878 LIB libspdk_conf.a 00:01:31.878 SO libspdk_conf.so.6.0 00:01:31.878 LIB libspdk_rdma.a 00:01:31.878 SO libspdk_rdma.so.6.0 00:01:31.878 SYMLINK libspdk_conf.so 00:01:31.878 LIB libspdk_json.a 00:01:31.878 SO libspdk_json.so.6.0 00:01:31.878 SYMLINK libspdk_rdma.so 00:01:31.878 SYMLINK libspdk_json.so 00:01:32.137 LIB libspdk_vmd.a 00:01:32.137 SO libspdk_vmd.so.6.0 00:01:32.137 SYMLINK libspdk_vmd.so 00:01:32.137 CC lib/jsonrpc/jsonrpc_server.o 00:01:32.137 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:32.137 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:32.137 CC lib/jsonrpc/jsonrpc_client.o 00:01:32.137 LIB libspdk_idxd.a 00:01:32.137 SO libspdk_idxd.so.12.0 00:01:32.396 SYMLINK libspdk_idxd.so 00:01:32.396 LIB libspdk_jsonrpc.a 00:01:32.396 SO libspdk_jsonrpc.so.6.0 00:01:32.656 LIB libspdk_env_dpdk.a 00:01:32.656 SYMLINK libspdk_jsonrpc.so 00:01:32.656 SO libspdk_env_dpdk.so.14.0 00:01:32.656 SYMLINK libspdk_env_dpdk.so 00:01:32.656 CC lib/rpc/rpc.o 00:01:32.914 LIB libspdk_rpc.a 00:01:32.914 SO libspdk_rpc.so.6.0 00:01:32.914 SYMLINK libspdk_rpc.so 00:01:33.173 CC lib/notify/notify_rpc.o 00:01:33.173 CC lib/notify/notify.o 00:01:33.173 CC lib/trace/trace.o 00:01:33.173 CC lib/trace/trace_flags.o 00:01:33.173 CC lib/trace/trace_rpc.o 00:01:33.173 CC lib/keyring/keyring.o 00:01:33.173 CC lib/keyring/keyring_rpc.o 00:01:33.433 LIB libspdk_notify.a 00:01:33.433 SO libspdk_notify.so.6.0 00:01:33.433 LIB libspdk_keyring.a 00:01:33.433 SYMLINK libspdk_notify.so 00:01:33.433 SO libspdk_keyring.so.1.0 00:01:33.433 LIB libspdk_trace.a 00:01:33.433 SO libspdk_trace.so.10.0 00:01:33.433 SYMLINK libspdk_keyring.so 00:01:33.433 SYMLINK libspdk_trace.so 00:01:33.693 CC lib/thread/thread.o 00:01:33.693 CC lib/thread/iobuf.o 00:01:33.693 CC lib/sock/sock.o 00:01:33.693 CC lib/sock/sock_rpc.o 00:01:34.264 LIB libspdk_sock.a 00:01:34.264 SO libspdk_sock.so.9.0 00:01:34.264 SYMLINK libspdk_sock.so 00:01:34.522 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:34.522 CC lib/nvme/nvme_ctrlr.o 
00:01:34.522 CC lib/nvme/nvme_fabric.o 00:01:34.522 CC lib/nvme/nvme_ns.o 00:01:34.522 CC lib/nvme/nvme_ns_cmd.o 00:01:34.522 CC lib/nvme/nvme_pcie_common.o 00:01:34.522 CC lib/nvme/nvme_pcie.o 00:01:34.522 CC lib/nvme/nvme.o 00:01:34.522 CC lib/nvme/nvme_qpair.o 00:01:34.522 CC lib/nvme/nvme_transport.o 00:01:34.522 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:34.522 CC lib/nvme/nvme_discovery.o 00:01:34.522 CC lib/nvme/nvme_quirks.o 00:01:34.522 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:34.522 CC lib/nvme/nvme_io_msg.o 00:01:34.522 CC lib/nvme/nvme_tcp.o 00:01:34.522 CC lib/nvme/nvme_opal.o 00:01:34.522 CC lib/nvme/nvme_poll_group.o 00:01:34.522 CC lib/nvme/nvme_zns.o 00:01:34.522 CC lib/nvme/nvme_cuse.o 00:01:34.522 CC lib/nvme/nvme_auth.o 00:01:34.522 CC lib/nvme/nvme_stubs.o 00:01:34.522 CC lib/nvme/nvme_rdma.o 00:01:35.458 LIB libspdk_thread.a 00:01:35.458 SO libspdk_thread.so.10.0 00:01:35.458 SYMLINK libspdk_thread.so 00:01:35.717 CC lib/accel/accel.o 00:01:35.717 CC lib/accel/accel_rpc.o 00:01:35.717 CC lib/accel/accel_sw.o 00:01:35.717 CC lib/virtio/virtio.o 00:01:35.717 CC lib/virtio/virtio_vfio_user.o 00:01:35.717 CC lib/virtio/virtio_vhost_user.o 00:01:35.717 CC lib/virtio/virtio_pci.o 00:01:35.717 CC lib/blob/request.o 00:01:35.717 CC lib/init/json_config.o 00:01:35.717 CC lib/blob/blobstore.o 00:01:35.717 CC lib/init/subsystem_rpc.o 00:01:35.717 CC lib/init/subsystem.o 00:01:35.717 CC lib/blob/blob_bs_dev.o 00:01:35.717 CC lib/blob/zeroes.o 00:01:35.717 CC lib/init/rpc.o 00:01:35.976 LIB libspdk_init.a 00:01:35.976 SO libspdk_init.so.5.0 00:01:35.976 SYMLINK libspdk_init.so 00:01:35.976 LIB libspdk_virtio.a 00:01:36.234 SO libspdk_virtio.so.7.0 00:01:36.234 SYMLINK libspdk_virtio.so 00:01:36.234 CC lib/event/app.o 00:01:36.234 CC lib/event/reactor.o 00:01:36.234 CC lib/event/app_rpc.o 00:01:36.234 CC lib/event/scheduler_static.o 00:01:36.234 CC lib/event/log_rpc.o 00:01:36.234 LIB libspdk_nvme.a 00:01:36.492 SO libspdk_nvme.so.13.0 00:01:36.753 SYMLINK libspdk_nvme.so 00:01:36.753 LIB libspdk_event.a 00:01:36.753 LIB libspdk_accel.a 00:01:36.753 SO libspdk_event.so.13.0 00:01:36.753 SO libspdk_accel.so.15.0 00:01:36.753 SYMLINK libspdk_event.so 00:01:37.011 SYMLINK libspdk_accel.so 00:01:37.011 CC lib/bdev/bdev.o 00:01:37.011 CC lib/bdev/bdev_rpc.o 00:01:37.011 CC lib/bdev/bdev_zone.o 00:01:37.011 CC lib/bdev/part.o 00:01:37.011 CC lib/bdev/scsi_nvme.o 00:01:38.389 LIB libspdk_blob.a 00:01:38.389 SO libspdk_blob.so.11.0 00:01:38.648 SYMLINK libspdk_blob.so 00:01:38.908 CC lib/blobfs/blobfs.o 00:01:38.908 CC lib/blobfs/tree.o 00:01:38.908 CC lib/lvol/lvol.o 00:01:38.908 LIB libspdk_bdev.a 00:01:39.168 SO libspdk_bdev.so.15.0 00:01:39.168 SYMLINK libspdk_bdev.so 00:01:39.427 CC lib/ftl/ftl_init.o 00:01:39.427 CC lib/ftl/ftl_core.o 00:01:39.427 CC lib/ftl/ftl_sb.o 00:01:39.427 CC lib/ftl/ftl_layout.o 00:01:39.427 CC lib/ftl/ftl_debug.o 00:01:39.427 CC lib/ftl/ftl_io.o 00:01:39.427 CC lib/ftl/ftl_l2p_flat.o 00:01:39.427 CC lib/ftl/ftl_l2p.o 00:01:39.427 CC lib/ftl/ftl_band.o 00:01:39.427 CC lib/ftl/ftl_band_ops.o 00:01:39.427 CC lib/ftl/ftl_nv_cache.o 00:01:39.427 CC lib/ftl/ftl_reloc.o 00:01:39.427 CC lib/ftl/ftl_writer.o 00:01:39.427 CC lib/ftl/ftl_rq.o 00:01:39.427 CC lib/ftl/ftl_l2p_cache.o 00:01:39.427 CC lib/ftl/ftl_p2l.o 00:01:39.427 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:39.427 CC lib/ftl/mngt/ftl_mngt.o 00:01:39.427 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:39.427 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:39.427 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:39.427 CC 
lib/ftl/mngt/ftl_mngt_misc.o 00:01:39.427 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:39.427 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:39.427 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:39.427 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:39.427 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:39.427 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:39.427 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:39.427 CC lib/ftl/utils/ftl_conf.o 00:01:39.427 CC lib/ftl/utils/ftl_mempool.o 00:01:39.427 CC lib/ftl/utils/ftl_md.o 00:01:39.427 CC lib/ftl/utils/ftl_property.o 00:01:39.427 CC lib/ftl/utils/ftl_bitmap.o 00:01:39.427 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:39.427 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:39.427 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:39.427 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:39.427 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:39.427 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:39.427 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:39.427 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:39.427 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:39.427 CC lib/ftl/base/ftl_base_dev.o 00:01:39.427 CC lib/ftl/base/ftl_base_bdev.o 00:01:39.427 CC lib/ftl/ftl_trace.o 00:01:39.427 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:39.427 CC lib/nvmf/ctrlr.o 00:01:39.427 CC lib/nvmf/ctrlr_discovery.o 00:01:39.427 CC lib/nvmf/subsystem.o 00:01:39.427 CC lib/nvmf/ctrlr_bdev.o 00:01:39.427 CC lib/nvmf/nvmf_rpc.o 00:01:39.427 CC lib/nvmf/tcp.o 00:01:39.427 CC lib/nvmf/transport.o 00:01:39.427 CC lib/nvmf/rdma.o 00:01:39.427 CC lib/nvmf/nvmf.o 00:01:39.427 CC lib/scsi/dev.o 00:01:39.428 CC lib/scsi/lun.o 00:01:39.428 CC lib/scsi/port.o 00:01:39.428 CC lib/scsi/scsi_bdev.o 00:01:39.428 CC lib/scsi/scsi.o 00:01:39.428 CC lib/scsi/scsi_rpc.o 00:01:39.428 CC lib/scsi/task.o 00:01:39.428 CC lib/scsi/scsi_pr.o 00:01:39.428 CC lib/ublk/ublk.o 00:01:39.428 CC lib/ublk/ublk_rpc.o 00:01:39.428 CC lib/nbd/nbd.o 00:01:39.428 CC lib/nbd/nbd_rpc.o 00:01:39.687 LIB libspdk_blobfs.a 00:01:39.687 SO libspdk_blobfs.so.10.0 00:01:39.687 LIB libspdk_lvol.a 00:01:39.945 SO libspdk_lvol.so.10.0 00:01:39.945 SYMLINK libspdk_blobfs.so 00:01:39.945 SYMLINK libspdk_lvol.so 00:01:40.204 LIB libspdk_scsi.a 00:01:40.204 LIB libspdk_nbd.a 00:01:40.204 SO libspdk_scsi.so.9.0 00:01:40.204 SO libspdk_nbd.so.7.0 00:01:40.204 LIB libspdk_ublk.a 00:01:40.204 SYMLINK libspdk_nbd.so 00:01:40.204 SO libspdk_ublk.so.3.0 00:01:40.204 SYMLINK libspdk_scsi.so 00:01:40.204 SYMLINK libspdk_ublk.so 00:01:40.463 CC lib/vhost/vhost.o 00:01:40.463 CC lib/vhost/rte_vhost_user.o 00:01:40.463 CC lib/vhost/vhost_rpc.o 00:01:40.463 CC lib/vhost/vhost_scsi.o 00:01:40.463 CC lib/vhost/vhost_blk.o 00:01:40.463 CC lib/iscsi/conn.o 00:01:40.463 CC lib/iscsi/param.o 00:01:40.463 CC lib/iscsi/init_grp.o 00:01:40.463 CC lib/iscsi/iscsi.o 00:01:40.463 CC lib/iscsi/md5.o 00:01:40.463 CC lib/iscsi/portal_grp.o 00:01:40.463 CC lib/iscsi/tgt_node.o 00:01:40.463 CC lib/iscsi/iscsi_subsystem.o 00:01:40.463 CC lib/iscsi/iscsi_rpc.o 00:01:40.463 CC lib/iscsi/task.o 00:01:40.722 LIB libspdk_ftl.a 00:01:40.722 SO libspdk_ftl.so.9.0 00:01:40.982 SYMLINK libspdk_ftl.so 00:01:41.241 LIB libspdk_nvmf.a 00:01:41.501 SO libspdk_nvmf.so.18.0 00:01:41.501 LIB libspdk_vhost.a 00:01:41.501 SO libspdk_vhost.so.8.0 00:01:41.761 SYMLINK libspdk_nvmf.so 00:01:41.761 SYMLINK libspdk_vhost.so 00:01:42.020 LIB libspdk_iscsi.a 00:01:42.020 SO libspdk_iscsi.so.8.0 00:01:42.020 SYMLINK libspdk_iscsi.so 00:01:42.587 CC module/env_dpdk/env_dpdk_rpc.o 00:01:42.587 CC module/keyring/file/keyring.o 00:01:42.587 CC module/keyring/file/keyring_rpc.o 
00:01:42.587 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:42.587 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:42.587 CC module/sock/posix/posix.o 00:01:42.587 CC module/accel/error/accel_error_rpc.o 00:01:42.587 CC module/accel/ioat/accel_ioat_rpc.o 00:01:42.587 CC module/accel/error/accel_error.o 00:01:42.587 CC module/accel/ioat/accel_ioat.o 00:01:42.587 CC module/blob/bdev/blob_bdev.o 00:01:42.587 CC module/accel/iaa/accel_iaa_rpc.o 00:01:42.587 CC module/accel/iaa/accel_iaa.o 00:01:42.587 CC module/accel/dsa/accel_dsa_rpc.o 00:01:42.587 CC module/accel/dsa/accel_dsa.o 00:01:42.587 CC module/scheduler/gscheduler/gscheduler.o 00:01:42.587 LIB libspdk_env_dpdk_rpc.a 00:01:42.587 SO libspdk_env_dpdk_rpc.so.6.0 00:01:42.587 SYMLINK libspdk_env_dpdk_rpc.so 00:01:42.587 LIB libspdk_scheduler_gscheduler.a 00:01:42.587 LIB libspdk_scheduler_dynamic.a 00:01:42.587 LIB libspdk_keyring_file.a 00:01:42.587 SO libspdk_scheduler_dynamic.so.4.0 00:01:42.587 LIB libspdk_scheduler_dpdk_governor.a 00:01:42.587 SO libspdk_scheduler_gscheduler.so.4.0 00:01:42.587 SO libspdk_scheduler_dpdk_governor.so.4.0 00:01:42.587 SO libspdk_keyring_file.so.1.0 00:01:42.587 LIB libspdk_accel_error.a 00:01:42.587 SYMLINK libspdk_scheduler_gscheduler.so 00:01:42.846 SYMLINK libspdk_scheduler_dynamic.so 00:01:42.846 LIB libspdk_accel_ioat.a 00:01:42.846 SO libspdk_accel_error.so.2.0 00:01:42.846 LIB libspdk_accel_iaa.a 00:01:42.846 SYMLINK libspdk_scheduler_dpdk_governor.so 00:01:42.846 SYMLINK libspdk_keyring_file.so 00:01:42.846 SO libspdk_accel_ioat.so.6.0 00:01:42.846 SO libspdk_accel_iaa.so.3.0 00:01:42.846 SYMLINK libspdk_accel_error.so 00:01:42.846 LIB libspdk_accel_dsa.a 00:01:42.846 LIB libspdk_blob_bdev.a 00:01:42.846 SYMLINK libspdk_accel_ioat.so 00:01:42.846 SO libspdk_accel_dsa.so.5.0 00:01:42.846 SO libspdk_blob_bdev.so.11.0 00:01:42.846 SYMLINK libspdk_accel_iaa.so 00:01:42.846 SYMLINK libspdk_blob_bdev.so 00:01:42.846 SYMLINK libspdk_accel_dsa.so 00:01:43.105 LIB libspdk_sock_posix.a 00:01:43.105 SO libspdk_sock_posix.so.6.0 00:01:43.105 CC module/bdev/error/vbdev_error.o 00:01:43.105 CC module/bdev/error/vbdev_error_rpc.o 00:01:43.105 CC module/bdev/null/bdev_null_rpc.o 00:01:43.105 CC module/bdev/null/bdev_null.o 00:01:43.106 CC module/bdev/passthru/vbdev_passthru.o 00:01:43.106 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:43.106 CC module/bdev/delay/vbdev_delay_rpc.o 00:01:43.106 CC module/bdev/delay/vbdev_delay.o 00:01:43.106 CC module/bdev/lvol/vbdev_lvol.o 00:01:43.106 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:01:43.106 CC module/bdev/raid/bdev_raid.o 00:01:43.106 CC module/bdev/raid/bdev_raid_rpc.o 00:01:43.106 CC module/blobfs/bdev/blobfs_bdev.o 00:01:43.106 CC module/bdev/raid/bdev_raid_sb.o 00:01:43.106 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:01:43.106 CC module/bdev/gpt/gpt.o 00:01:43.106 CC module/bdev/virtio/bdev_virtio_scsi.o 00:01:43.106 CC module/bdev/raid/concat.o 00:01:43.106 CC module/bdev/gpt/vbdev_gpt.o 00:01:43.106 CC module/bdev/raid/raid1.o 00:01:43.106 CC module/bdev/virtio/bdev_virtio_blk.o 00:01:43.106 CC module/bdev/raid/raid0.o 00:01:43.106 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:43.106 CC module/bdev/zone_block/vbdev_zone_block.o 00:01:43.106 CC module/bdev/virtio/bdev_virtio_rpc.o 00:01:43.106 CC module/bdev/iscsi/bdev_iscsi.o 00:01:43.106 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:43.106 CC module/bdev/ftl/bdev_ftl.o 00:01:43.106 CC module/bdev/ftl/bdev_ftl_rpc.o 00:01:43.106 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:43.106 CC 
module/bdev/nvme/bdev_nvme.o 00:01:43.106 CC module/bdev/nvme/nvme_rpc.o 00:01:43.106 CC module/bdev/split/vbdev_split.o 00:01:43.106 CC module/bdev/split/vbdev_split_rpc.o 00:01:43.106 CC module/bdev/nvme/bdev_mdns_client.o 00:01:43.106 CC module/bdev/nvme/vbdev_opal_rpc.o 00:01:43.106 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:01:43.106 CC module/bdev/aio/bdev_aio.o 00:01:43.106 CC module/bdev/nvme/vbdev_opal.o 00:01:43.106 CC module/bdev/aio/bdev_aio_rpc.o 00:01:43.106 CC module/bdev/malloc/bdev_malloc.o 00:01:43.106 CC module/bdev/malloc/bdev_malloc_rpc.o 00:01:43.106 SYMLINK libspdk_sock_posix.so 00:01:43.364 LIB libspdk_blobfs_bdev.a 00:01:43.364 LIB libspdk_bdev_passthru.a 00:01:43.364 LIB libspdk_bdev_null.a 00:01:43.364 LIB libspdk_bdev_split.a 00:01:43.364 SO libspdk_blobfs_bdev.so.6.0 00:01:43.364 SO libspdk_bdev_null.so.6.0 00:01:43.364 SO libspdk_bdev_passthru.so.6.0 00:01:43.623 SO libspdk_bdev_split.so.6.0 00:01:43.623 LIB libspdk_bdev_error.a 00:01:43.623 SYMLINK libspdk_bdev_passthru.so 00:01:43.623 LIB libspdk_bdev_ftl.a 00:01:43.623 SYMLINK libspdk_bdev_split.so 00:01:43.623 SYMLINK libspdk_blobfs_bdev.so 00:01:43.623 LIB libspdk_bdev_iscsi.a 00:01:43.623 SYMLINK libspdk_bdev_null.so 00:01:43.623 LIB libspdk_bdev_gpt.a 00:01:43.623 LIB libspdk_bdev_delay.a 00:01:43.623 SO libspdk_bdev_ftl.so.6.0 00:01:43.623 SO libspdk_bdev_error.so.6.0 00:01:43.623 SO libspdk_bdev_iscsi.so.6.0 00:01:43.623 LIB libspdk_bdev_zone_block.a 00:01:43.623 LIB libspdk_bdev_malloc.a 00:01:43.623 SO libspdk_bdev_delay.so.6.0 00:01:43.623 SO libspdk_bdev_gpt.so.6.0 00:01:43.623 LIB libspdk_bdev_aio.a 00:01:43.623 SO libspdk_bdev_zone_block.so.6.0 00:01:43.623 SO libspdk_bdev_malloc.so.6.0 00:01:43.623 SYMLINK libspdk_bdev_error.so 00:01:43.623 SYMLINK libspdk_bdev_iscsi.so 00:01:43.623 SYMLINK libspdk_bdev_ftl.so 00:01:43.623 SO libspdk_bdev_aio.so.6.0 00:01:43.623 SYMLINK libspdk_bdev_delay.so 00:01:43.623 SYMLINK libspdk_bdev_gpt.so 00:01:43.623 SYMLINK libspdk_bdev_zone_block.so 00:01:43.623 SYMLINK libspdk_bdev_malloc.so 00:01:43.623 SYMLINK libspdk_bdev_aio.so 00:01:43.882 LIB libspdk_bdev_lvol.a 00:01:43.882 LIB libspdk_bdev_virtio.a 00:01:43.882 SO libspdk_bdev_lvol.so.6.0 00:01:43.882 SO libspdk_bdev_virtio.so.6.0 00:01:43.882 SYMLINK libspdk_bdev_virtio.so 00:01:43.882 SYMLINK libspdk_bdev_lvol.so 00:01:43.882 LIB libspdk_bdev_raid.a 00:01:44.140 SO libspdk_bdev_raid.so.6.0 00:01:44.140 SYMLINK libspdk_bdev_raid.so 00:01:44.707 LIB libspdk_bdev_nvme.a 00:01:44.966 SO libspdk_bdev_nvme.so.7.0 00:01:44.966 SYMLINK libspdk_bdev_nvme.so 00:01:45.580 CC module/event/subsystems/keyring/keyring.o 00:01:45.580 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:01:45.580 CC module/event/subsystems/vmd/vmd_rpc.o 00:01:45.580 CC module/event/subsystems/iobuf/iobuf.o 00:01:45.580 CC module/event/subsystems/vmd/vmd.o 00:01:45.580 CC module/event/subsystems/sock/sock.o 00:01:45.580 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:01:45.580 CC module/event/subsystems/scheduler/scheduler.o 00:01:45.580 LIB libspdk_event_keyring.a 00:01:45.580 LIB libspdk_event_sock.a 00:01:45.580 LIB libspdk_event_vmd.a 00:01:45.580 LIB libspdk_event_vhost_blk.a 00:01:45.580 SO libspdk_event_keyring.so.1.0 00:01:45.580 LIB libspdk_event_scheduler.a 00:01:45.580 SO libspdk_event_sock.so.5.0 00:01:45.580 LIB libspdk_event_iobuf.a 00:01:45.580 SO libspdk_event_vhost_blk.so.3.0 00:01:45.580 SO libspdk_event_vmd.so.6.0 00:01:45.580 SO libspdk_event_scheduler.so.4.0 00:01:45.580 SO libspdk_event_iobuf.so.3.0 00:01:45.580 
SYMLINK libspdk_event_keyring.so 00:01:45.580 SYMLINK libspdk_event_sock.so 00:01:45.580 SYMLINK libspdk_event_vmd.so 00:01:45.580 SYMLINK libspdk_event_scheduler.so 00:01:45.580 SYMLINK libspdk_event_vhost_blk.so 00:01:45.580 SYMLINK libspdk_event_iobuf.so 00:01:45.839 CC module/event/subsystems/accel/accel.o 00:01:46.098 LIB libspdk_event_accel.a 00:01:46.098 SO libspdk_event_accel.so.6.0 00:01:46.098 SYMLINK libspdk_event_accel.so 00:01:46.357 CC module/event/subsystems/bdev/bdev.o 00:01:46.357 LIB libspdk_event_bdev.a 00:01:46.357 SO libspdk_event_bdev.so.6.0 00:01:46.614 SYMLINK libspdk_event_bdev.so 00:01:46.874 CC module/event/subsystems/nbd/nbd.o 00:01:46.874 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:01:46.874 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:01:46.874 CC module/event/subsystems/scsi/scsi.o 00:01:46.874 CC module/event/subsystems/ublk/ublk.o 00:01:46.874 LIB libspdk_event_nbd.a 00:01:46.874 LIB libspdk_event_ublk.a 00:01:46.874 SO libspdk_event_nbd.so.6.0 00:01:46.874 LIB libspdk_event_scsi.a 00:01:46.874 SO libspdk_event_ublk.so.3.0 00:01:46.874 SO libspdk_event_scsi.so.6.0 00:01:46.874 SYMLINK libspdk_event_nbd.so 00:01:46.874 LIB libspdk_event_nvmf.a 00:01:46.874 SYMLINK libspdk_event_ublk.so 00:01:46.874 SYMLINK libspdk_event_scsi.so 00:01:47.132 SO libspdk_event_nvmf.so.6.0 00:01:47.132 SYMLINK libspdk_event_nvmf.so 00:01:47.132 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:01:47.132 CC module/event/subsystems/iscsi/iscsi.o 00:01:47.390 LIB libspdk_event_vhost_scsi.a 00:01:47.390 SO libspdk_event_vhost_scsi.so.3.0 00:01:47.390 LIB libspdk_event_iscsi.a 00:01:47.390 SYMLINK libspdk_event_vhost_scsi.so 00:01:47.390 SO libspdk_event_iscsi.so.6.0 00:01:47.390 SYMLINK libspdk_event_iscsi.so 00:01:47.648 SO libspdk.so.6.0 00:01:47.648 SYMLINK libspdk.so 00:01:47.908 CC app/spdk_nvme_perf/perf.o 00:01:47.908 CC app/trace_record/trace_record.o 00:01:47.908 CXX app/trace/trace.o 00:01:47.908 CC app/spdk_lspci/spdk_lspci.o 00:01:47.908 CC app/spdk_nvme_discover/discovery_aer.o 00:01:47.908 CC app/spdk_top/spdk_top.o 00:01:47.908 CC app/spdk_nvme_identify/identify.o 00:01:47.908 CC app/iscsi_tgt/iscsi_tgt.o 00:01:47.908 TEST_HEADER include/spdk/accel.h 00:01:47.908 TEST_HEADER include/spdk/accel_module.h 00:01:47.908 TEST_HEADER include/spdk/assert.h 00:01:47.908 TEST_HEADER include/spdk/barrier.h 00:01:47.908 CC app/nvmf_tgt/nvmf_main.o 00:01:47.908 TEST_HEADER include/spdk/bdev_module.h 00:01:47.908 TEST_HEADER include/spdk/base64.h 00:01:47.908 TEST_HEADER include/spdk/bdev.h 00:01:47.908 TEST_HEADER include/spdk/bdev_zone.h 00:01:47.908 TEST_HEADER include/spdk/bit_array.h 00:01:47.908 TEST_HEADER include/spdk/blob_bdev.h 00:01:47.908 TEST_HEADER include/spdk/bit_pool.h 00:01:47.908 TEST_HEADER include/spdk/blobfs_bdev.h 00:01:47.908 TEST_HEADER include/spdk/blob.h 00:01:47.908 TEST_HEADER include/spdk/blobfs.h 00:01:47.908 TEST_HEADER include/spdk/conf.h 00:01:47.908 CC app/spdk_dd/spdk_dd.o 00:01:47.908 TEST_HEADER include/spdk/config.h 00:01:47.908 CC examples/interrupt_tgt/interrupt_tgt.o 00:01:47.908 TEST_HEADER include/spdk/cpuset.h 00:01:47.908 TEST_HEADER include/spdk/crc16.h 00:01:47.908 TEST_HEADER include/spdk/crc32.h 00:01:47.908 TEST_HEADER include/spdk/crc64.h 00:01:47.908 TEST_HEADER include/spdk/dif.h 00:01:47.908 TEST_HEADER include/spdk/endian.h 00:01:47.908 TEST_HEADER include/spdk/env_dpdk.h 00:01:47.908 TEST_HEADER include/spdk/dma.h 00:01:47.908 CC app/vhost/vhost.o 00:01:47.908 CC test/rpc_client/rpc_client_test.o 00:01:47.908 
TEST_HEADER include/spdk/env.h 00:01:47.908 TEST_HEADER include/spdk/fd_group.h 00:01:47.908 TEST_HEADER include/spdk/event.h 00:01:47.908 TEST_HEADER include/spdk/fd.h 00:01:47.908 TEST_HEADER include/spdk/file.h 00:01:47.908 TEST_HEADER include/spdk/ftl.h 00:01:47.908 TEST_HEADER include/spdk/gpt_spec.h 00:01:47.908 TEST_HEADER include/spdk/hexlify.h 00:01:47.908 TEST_HEADER include/spdk/histogram_data.h 00:01:47.908 CC app/spdk_tgt/spdk_tgt.o 00:01:47.908 TEST_HEADER include/spdk/idxd.h 00:01:47.908 TEST_HEADER include/spdk/idxd_spec.h 00:01:47.908 TEST_HEADER include/spdk/init.h 00:01:47.908 TEST_HEADER include/spdk/ioat.h 00:01:47.908 TEST_HEADER include/spdk/ioat_spec.h 00:01:47.908 TEST_HEADER include/spdk/iscsi_spec.h 00:01:47.908 TEST_HEADER include/spdk/json.h 00:01:47.908 TEST_HEADER include/spdk/jsonrpc.h 00:01:47.908 TEST_HEADER include/spdk/keyring.h 00:01:47.908 TEST_HEADER include/spdk/keyring_module.h 00:01:47.908 TEST_HEADER include/spdk/likely.h 00:01:47.908 TEST_HEADER include/spdk/lvol.h 00:01:47.908 TEST_HEADER include/spdk/log.h 00:01:47.908 TEST_HEADER include/spdk/mmio.h 00:01:47.908 TEST_HEADER include/spdk/memory.h 00:01:47.908 TEST_HEADER include/spdk/nbd.h 00:01:47.908 TEST_HEADER include/spdk/notify.h 00:01:47.908 TEST_HEADER include/spdk/nvme.h 00:01:47.908 TEST_HEADER include/spdk/nvme_intel.h 00:01:47.908 TEST_HEADER include/spdk/nvme_ocssd.h 00:01:47.908 TEST_HEADER include/spdk/nvme_spec.h 00:01:47.908 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:01:47.908 TEST_HEADER include/spdk/nvmf_cmd.h 00:01:47.908 TEST_HEADER include/spdk/nvme_zns.h 00:01:47.908 TEST_HEADER include/spdk/nvmf_spec.h 00:01:47.908 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:01:47.908 TEST_HEADER include/spdk/nvmf_transport.h 00:01:47.908 TEST_HEADER include/spdk/nvmf.h 00:01:47.908 TEST_HEADER include/spdk/pci_ids.h 00:01:47.908 TEST_HEADER include/spdk/pipe.h 00:01:47.908 TEST_HEADER include/spdk/opal.h 00:01:47.908 TEST_HEADER include/spdk/opal_spec.h 00:01:47.908 TEST_HEADER include/spdk/queue.h 00:01:47.908 TEST_HEADER include/spdk/reduce.h 00:01:48.172 TEST_HEADER include/spdk/rpc.h 00:01:48.172 TEST_HEADER include/spdk/scsi.h 00:01:48.172 TEST_HEADER include/spdk/scheduler.h 00:01:48.172 TEST_HEADER include/spdk/scsi_spec.h 00:01:48.172 TEST_HEADER include/spdk/stdinc.h 00:01:48.172 TEST_HEADER include/spdk/sock.h 00:01:48.172 TEST_HEADER include/spdk/string.h 00:01:48.172 TEST_HEADER include/spdk/thread.h 00:01:48.172 TEST_HEADER include/spdk/trace.h 00:01:48.172 TEST_HEADER include/spdk/trace_parser.h 00:01:48.172 TEST_HEADER include/spdk/tree.h 00:01:48.172 TEST_HEADER include/spdk/util.h 00:01:48.172 TEST_HEADER include/spdk/uuid.h 00:01:48.172 TEST_HEADER include/spdk/ublk.h 00:01:48.172 TEST_HEADER include/spdk/vfio_user_pci.h 00:01:48.172 TEST_HEADER include/spdk/version.h 00:01:48.172 TEST_HEADER include/spdk/vfio_user_spec.h 00:01:48.172 TEST_HEADER include/spdk/vhost.h 00:01:48.172 TEST_HEADER include/spdk/vmd.h 00:01:48.172 TEST_HEADER include/spdk/xor.h 00:01:48.172 TEST_HEADER include/spdk/zipf.h 00:01:48.172 CXX test/cpp_headers/accel.o 00:01:48.172 CXX test/cpp_headers/assert.o 00:01:48.172 CXX test/cpp_headers/base64.o 00:01:48.172 CXX test/cpp_headers/accel_module.o 00:01:48.172 CXX test/cpp_headers/bdev.o 00:01:48.172 CXX test/cpp_headers/barrier.o 00:01:48.172 CXX test/cpp_headers/bdev_module.o 00:01:48.172 CXX test/cpp_headers/bdev_zone.o 00:01:48.172 CXX test/cpp_headers/bit_array.o 00:01:48.172 CXX test/cpp_headers/bit_pool.o 00:01:48.172 CXX 
test/cpp_headers/blobfs_bdev.o 00:01:48.172 CXX test/cpp_headers/blobfs.o 00:01:48.172 CXX test/cpp_headers/blob_bdev.o 00:01:48.172 CXX test/cpp_headers/conf.o 00:01:48.172 CXX test/cpp_headers/blob.o 00:01:48.172 CXX test/cpp_headers/config.o 00:01:48.172 CXX test/cpp_headers/crc32.o 00:01:48.172 CXX test/cpp_headers/crc16.o 00:01:48.172 CXX test/cpp_headers/cpuset.o 00:01:48.172 CXX test/cpp_headers/crc64.o 00:01:48.172 CXX test/cpp_headers/dif.o 00:01:48.172 CXX test/cpp_headers/endian.o 00:01:48.172 CXX test/cpp_headers/env_dpdk.o 00:01:48.172 CXX test/cpp_headers/dma.o 00:01:48.172 CXX test/cpp_headers/fd_group.o 00:01:48.172 CXX test/cpp_headers/env.o 00:01:48.172 CXX test/cpp_headers/event.o 00:01:48.172 CXX test/cpp_headers/fd.o 00:01:48.172 CXX test/cpp_headers/file.o 00:01:48.172 CXX test/cpp_headers/ftl.o 00:01:48.172 CXX test/cpp_headers/gpt_spec.o 00:01:48.172 CXX test/cpp_headers/hexlify.o 00:01:48.172 CXX test/cpp_headers/histogram_data.o 00:01:48.172 CXX test/cpp_headers/idxd_spec.o 00:01:48.172 CXX test/cpp_headers/idxd.o 00:01:48.172 CXX test/cpp_headers/ioat.o 00:01:48.172 CXX test/cpp_headers/init.o 00:01:48.172 CXX test/cpp_headers/iscsi_spec.o 00:01:48.172 CXX test/cpp_headers/ioat_spec.o 00:01:48.172 CXX test/cpp_headers/jsonrpc.o 00:01:48.172 CXX test/cpp_headers/keyring_module.o 00:01:48.172 CXX test/cpp_headers/json.o 00:01:48.172 CXX test/cpp_headers/keyring.o 00:01:48.172 CXX test/cpp_headers/log.o 00:01:48.172 CXX test/cpp_headers/likely.o 00:01:48.172 CXX test/cpp_headers/lvol.o 00:01:48.172 CXX test/cpp_headers/memory.o 00:01:48.172 CXX test/cpp_headers/mmio.o 00:01:48.172 CXX test/cpp_headers/nbd.o 00:01:48.172 CXX test/cpp_headers/notify.o 00:01:48.172 CXX test/cpp_headers/nvme.o 00:01:48.172 CXX test/cpp_headers/nvme_intel.o 00:01:48.172 CXX test/cpp_headers/nvme_ocssd.o 00:01:48.172 CXX test/cpp_headers/nvme_ocssd_spec.o 00:01:48.172 CC examples/sock/hello_world/hello_sock.o 00:01:48.172 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:01:48.172 CC test/env/pci/pci_ut.o 00:01:48.172 CC test/event/reactor/reactor.o 00:01:48.172 CC test/thread/poller_perf/poller_perf.o 00:01:48.172 CC test/event/event_perf/event_perf.o 00:01:48.172 CC app/fio/nvme/fio_plugin.o 00:01:48.172 CC examples/idxd/perf/perf.o 00:01:48.443 CC examples/util/zipf/zipf.o 00:01:48.443 CC test/nvme/fdp/fdp.o 00:01:48.443 CC test/env/memory/memory_ut.o 00:01:48.443 CC test/app/jsoncat/jsoncat.o 00:01:48.443 CC test/bdev/bdevio/bdevio.o 00:01:48.443 CC test/nvme/aer/aer.o 00:01:48.443 CC test/nvme/e2edp/nvme_dp.o 00:01:48.443 CC examples/ioat/verify/verify.o 00:01:48.443 CC examples/ioat/perf/perf.o 00:01:48.443 CC test/env/vtophys/vtophys.o 00:01:48.443 CC examples/nvme/hotplug/hotplug.o 00:01:48.443 CC test/nvme/simple_copy/simple_copy.o 00:01:48.443 CC test/nvme/overhead/overhead.o 00:01:48.443 CC test/app/histogram_perf/histogram_perf.o 00:01:48.443 CC test/event/app_repeat/app_repeat.o 00:01:48.443 CC test/accel/dif/dif.o 00:01:48.443 CC test/dma/test_dma/test_dma.o 00:01:48.443 CC examples/thread/thread/thread_ex.o 00:01:48.443 CC examples/nvme/reconnect/reconnect.o 00:01:48.443 CC examples/nvme/cmb_copy/cmb_copy.o 00:01:48.443 CC examples/vmd/lsvmd/lsvmd.o 00:01:48.443 CC test/nvme/doorbell_aers/doorbell_aers.o 00:01:48.443 CC examples/nvme/abort/abort.o 00:01:48.443 CC examples/vmd/led/led.o 00:01:48.443 CC test/nvme/sgl/sgl.o 00:01:48.443 CC test/nvme/connect_stress/connect_stress.o 00:01:48.443 CC test/nvme/fused_ordering/fused_ordering.o 00:01:48.443 CC 
examples/nvme/pmr_persistence/pmr_persistence.o 00:01:48.443 CC test/app/stub/stub.o 00:01:48.443 CC test/blobfs/mkfs/mkfs.o 00:01:48.443 CC test/nvme/boot_partition/boot_partition.o 00:01:48.443 CC examples/blob/hello_world/hello_blob.o 00:01:48.443 CC examples/nvme/hello_world/hello_world.o 00:01:48.443 CC test/nvme/startup/startup.o 00:01:48.443 CC test/event/reactor_perf/reactor_perf.o 00:01:48.443 CC test/nvme/err_injection/err_injection.o 00:01:48.443 CC examples/nvme/nvme_manage/nvme_manage.o 00:01:48.443 CC test/nvme/reset/reset.o 00:01:48.443 CC test/app/bdev_svc/bdev_svc.o 00:01:48.443 CC test/nvme/reserve/reserve.o 00:01:48.443 CC examples/bdev/hello_world/hello_bdev.o 00:01:48.443 CC examples/bdev/bdevperf/bdevperf.o 00:01:48.443 CC examples/nvme/arbitration/arbitration.o 00:01:48.443 CC test/nvme/cuse/cuse.o 00:01:48.443 CC examples/accel/perf/accel_perf.o 00:01:48.443 CC test/nvme/compliance/nvme_compliance.o 00:01:48.443 CC test/event/scheduler/scheduler.o 00:01:48.443 CC examples/blob/cli/blobcli.o 00:01:48.443 CC app/fio/bdev/fio_plugin.o 00:01:48.443 CC examples/nvmf/nvmf/nvmf.o 00:01:48.443 LINK spdk_trace_record 00:01:48.443 LINK iscsi_tgt 00:01:48.443 LINK spdk_nvme_discover 00:01:48.709 LINK spdk_lspci 00:01:48.709 LINK interrupt_tgt 00:01:48.969 CC test/env/mem_callbacks/mem_callbacks.o 00:01:48.969 LINK spdk_tgt 00:01:48.969 CC test/lvol/esnap/esnap.o 00:01:48.969 LINK reactor 00:01:48.969 LINK nvmf_tgt 00:01:48.969 LINK jsoncat 00:01:48.969 LINK vhost 00:01:48.969 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:01:48.969 LINK event_perf 00:01:48.969 LINK connect_stress 00:01:48.969 LINK led 00:01:48.969 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:01:48.969 LINK hello_sock 00:01:48.969 LINK lsvmd 00:01:48.969 LINK env_dpdk_post_init 00:01:48.969 LINK hello_blob 00:01:48.969 LINK vtophys 00:01:48.969 LINK rpc_client_test 00:01:48.969 CXX test/cpp_headers/nvme_spec.o 00:01:48.969 LINK boot_partition 00:01:49.230 CXX test/cpp_headers/nvme_zns.o 00:01:49.230 LINK startup 00:01:49.230 LINK cmb_copy 00:01:49.230 LINK pmr_persistence 00:01:49.230 CXX test/cpp_headers/nvmf_cmd.o 00:01:49.230 CXX test/cpp_headers/nvmf_fc_spec.o 00:01:49.230 LINK bdev_svc 00:01:49.230 LINK spdk_dd 00:01:49.230 CXX test/cpp_headers/nvmf.o 00:01:49.230 LINK poller_perf 00:01:49.230 CXX test/cpp_headers/nvmf_spec.o 00:01:49.230 CXX test/cpp_headers/nvmf_transport.o 00:01:49.230 CXX test/cpp_headers/opal.o 00:01:49.230 CXX test/cpp_headers/pci_ids.o 00:01:49.230 CXX test/cpp_headers/opal_spec.o 00:01:49.230 LINK err_injection 00:01:49.230 CXX test/cpp_headers/queue.o 00:01:49.230 CXX test/cpp_headers/pipe.o 00:01:49.230 CXX test/cpp_headers/reduce.o 00:01:49.230 LINK verify 00:01:49.230 CXX test/cpp_headers/rpc.o 00:01:49.230 CXX test/cpp_headers/scheduler.o 00:01:49.230 CXX test/cpp_headers/scsi.o 00:01:49.230 CXX test/cpp_headers/scsi_spec.o 00:01:49.230 CXX test/cpp_headers/sock.o 00:01:49.230 CXX test/cpp_headers/string.o 00:01:49.230 CXX test/cpp_headers/thread.o 00:01:49.231 CXX test/cpp_headers/stdinc.o 00:01:49.231 CXX test/cpp_headers/trace.o 00:01:49.231 CXX test/cpp_headers/trace_parser.o 00:01:49.231 CXX test/cpp_headers/tree.o 00:01:49.231 LINK app_repeat 00:01:49.231 CXX test/cpp_headers/ublk.o 00:01:49.231 CXX test/cpp_headers/util.o 00:01:49.231 LINK histogram_perf 00:01:49.231 LINK reactor_perf 00:01:49.231 LINK aer 00:01:49.231 CXX test/cpp_headers/uuid.o 00:01:49.231 CXX test/cpp_headers/version.o 00:01:49.231 LINK hello_bdev 00:01:49.231 CXX test/cpp_headers/vfio_user_pci.o 
00:01:49.231 CXX test/cpp_headers/vfio_user_spec.o 00:01:49.231 CXX test/cpp_headers/vhost.o 00:01:49.231 CXX test/cpp_headers/vmd.o 00:01:49.231 LINK fdp 00:01:49.231 CXX test/cpp_headers/xor.o 00:01:49.231 LINK mkfs 00:01:49.231 CXX test/cpp_headers/zipf.o 00:01:49.231 LINK doorbell_aers 00:01:49.231 LINK simple_copy 00:01:49.495 LINK stub 00:01:49.495 LINK spdk_trace 00:01:49.495 LINK hotplug 00:01:49.495 LINK zipf 00:01:49.495 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:01:49.495 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:01:49.495 LINK bdevio 00:01:49.495 LINK scheduler 00:01:49.495 LINK idxd_perf 00:01:49.495 LINK hello_world 00:01:49.495 LINK reserve 00:01:49.495 LINK ioat_perf 00:01:49.495 LINK pci_ut 00:01:49.495 LINK fused_ordering 00:01:49.495 LINK thread 00:01:49.495 LINK overhead 00:01:49.495 LINK nvme_dp 00:01:49.495 LINK reset 00:01:49.495 LINK arbitration 00:01:49.495 LINK sgl 00:01:49.495 LINK abort 00:01:49.753 LINK nvme_manage 00:01:49.753 LINK test_dma 00:01:49.753 LINK nvmf 00:01:49.753 LINK nvme_compliance 00:01:49.753 LINK reconnect 00:01:49.753 LINK mem_callbacks 00:01:49.753 LINK accel_perf 00:01:49.753 LINK spdk_top 00:01:50.011 LINK bdevperf 00:01:50.011 LINK dif 00:01:50.011 LINK blobcli 00:01:50.011 LINK spdk_nvme 00:01:50.011 LINK nvme_fuzz 00:01:50.011 LINK spdk_bdev 00:01:50.011 LINK spdk_nvme_perf 00:01:50.011 LINK vhost_fuzz 00:01:50.011 LINK memory_ut 00:01:50.011 LINK spdk_nvme_identify 00:01:50.270 LINK cuse 00:01:50.837 LINK iscsi_fuzz 00:01:52.739 LINK esnap 00:01:53.007 00:01:53.008 real 0m38.683s 00:01:53.008 user 6m2.302s 00:01:53.008 sys 5m20.709s 00:01:53.008 21:02:47 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:01:53.008 21:02:47 -- common/autotest_common.sh@10 -- $ set +x 00:01:53.008 ************************************ 00:01:53.008 END TEST make 00:01:53.008 ************************************ 00:01:53.008 21:02:47 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:01:53.008 21:02:47 -- pm/common@30 -- $ signal_monitor_resources TERM 00:01:53.008 21:02:47 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:01:53.008 21:02:47 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:53.008 21:02:47 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:01:53.008 21:02:47 -- pm/common@45 -- $ pid=1101653 00:01:53.008 21:02:47 -- pm/common@52 -- $ sudo kill -TERM 1101653 00:01:53.008 21:02:47 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:53.008 21:02:47 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:01:53.008 21:02:47 -- pm/common@45 -- $ pid=1101654 00:01:53.008 21:02:47 -- pm/common@52 -- $ sudo kill -TERM 1101654 00:01:53.008 21:02:47 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:53.008 21:02:47 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:01:53.008 21:02:47 -- pm/common@45 -- $ pid=1101662 00:01:53.008 21:02:47 -- pm/common@52 -- $ sudo kill -TERM 1101662 00:01:53.008 21:02:47 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:53.008 21:02:47 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:01:53.008 21:02:47 -- pm/common@45 -- $ pid=1101655 00:01:53.008 21:02:47 -- pm/common@52 -- $ sudo kill -TERM 1101655 00:01:53.008 21:02:47 -- spdk/autotest.sh@25 -- # source 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:01:53.008 21:02:47 -- nvmf/common.sh@7 -- # uname -s 00:01:53.008 21:02:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:01:53.008 21:02:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:01:53.008 21:02:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:01:53.008 21:02:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:01:53.008 21:02:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:01:53.008 21:02:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:01:53.008 21:02:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:01:53.008 21:02:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:01:53.008 21:02:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:01:53.008 21:02:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:01:53.008 21:02:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:01:53.008 21:02:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:01:53.008 21:02:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:01:53.008 21:02:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:01:53.008 21:02:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:01:53.008 21:02:47 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:01:53.008 21:02:47 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:01:53.008 21:02:47 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:01:53.008 21:02:47 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:53.008 21:02:47 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:53.008 21:02:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:53.008 21:02:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:53.008 21:02:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:53.008 21:02:47 -- paths/export.sh@5 -- # export PATH 00:01:53.008 21:02:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:53.008 21:02:47 -- nvmf/common.sh@47 -- # : 0 00:01:53.008 21:02:47 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:01:53.008 21:02:47 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:01:53.008 21:02:47 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:01:53.008 21:02:47 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:01:53.008 21:02:47 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:01:53.008 
21:02:47 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:01:53.008 21:02:47 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:01:53.008 21:02:47 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:01:53.008 21:02:47 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:01:53.008 21:02:47 -- spdk/autotest.sh@32 -- # uname -s 00:01:53.008 21:02:47 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:01:53.008 21:02:47 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:01:53.008 21:02:47 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/coredumps 00:01:53.008 21:02:47 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:01:53.008 21:02:47 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/coredumps 00:01:53.008 21:02:47 -- spdk/autotest.sh@44 -- # modprobe nbd 00:01:53.008 21:02:47 -- spdk/autotest.sh@46 -- # type -P udevadm 00:01:53.008 21:02:47 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:01:53.008 21:02:47 -- spdk/autotest.sh@48 -- # udevadm_pid=1160414 00:01:53.008 21:02:47 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:01:53.008 21:02:47 -- pm/common@17 -- # local monitor 00:01:53.008 21:02:47 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:53.008 21:02:47 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:01:53.008 21:02:47 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=1160415 00:01:53.008 21:02:47 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:53.008 21:02:47 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=1160416 00:01:53.008 21:02:47 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:53.008 21:02:47 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=1160417 00:01:53.008 21:02:47 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:53.008 21:02:47 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=1160419 00:01:53.008 21:02:47 -- pm/common@26 -- # sleep 1 00:01:53.272 21:02:47 -- pm/common@21 -- # date +%s 00:01:53.272 21:02:47 -- pm/common@21 -- # date +%s 00:01:53.272 21:02:47 -- pm/common@21 -- # date +%s 00:01:53.272 21:02:47 -- pm/common@21 -- # date +%s 00:01:53.272 21:02:47 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1713898967 00:01:53.272 21:02:47 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1713898967 00:01:53.272 21:02:47 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1713898967 00:01:53.272 21:02:47 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1713898967 00:01:53.272 Redirecting to /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/monitor.autotest.sh.1713898967_collect-bmc-pm.bmc.pm.log 00:01:53.272 Redirecting to /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/monitor.autotest.sh.1713898967_collect-vmstat.pm.log 00:01:53.272 Redirecting to 
/var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/monitor.autotest.sh.1713898967_collect-cpu-temp.pm.log 00:01:53.272 Redirecting to /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/monitor.autotest.sh.1713898967_collect-cpu-load.pm.log 00:01:54.206 21:02:48 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:01:54.206 21:02:48 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:01:54.206 21:02:48 -- common/autotest_common.sh@710 -- # xtrace_disable 00:01:54.206 21:02:48 -- common/autotest_common.sh@10 -- # set +x 00:01:54.206 21:02:48 -- spdk/autotest.sh@59 -- # create_test_list 00:01:54.206 21:02:48 -- common/autotest_common.sh@734 -- # xtrace_disable 00:01:54.206 21:02:48 -- common/autotest_common.sh@10 -- # set +x 00:01:54.206 21:02:48 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/dsa-phy-autotest/spdk/autotest.sh 00:01:54.206 21:02:48 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/dsa-phy-autotest/spdk 00:01:54.206 21:02:48 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/dsa-phy-autotest/spdk 00:01:54.206 21:02:48 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/dsa-phy-autotest/spdk/../output 00:01:54.206 21:02:48 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/dsa-phy-autotest/spdk 00:01:54.206 21:02:48 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:01:54.206 21:02:48 -- common/autotest_common.sh@1441 -- # uname 00:01:54.206 21:02:48 -- common/autotest_common.sh@1441 -- # '[' Linux = FreeBSD ']' 00:01:54.206 21:02:48 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:01:54.206 21:02:48 -- common/autotest_common.sh@1461 -- # uname 00:01:54.206 21:02:48 -- common/autotest_common.sh@1461 -- # [[ Linux = FreeBSD ]] 00:01:54.206 21:02:48 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:01:54.206 21:02:48 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:01:54.206 21:02:48 -- spdk/autotest.sh@72 -- # hash lcov 00:01:54.206 21:02:48 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:01:54.206 21:02:48 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:01:54.206 --rc lcov_branch_coverage=1 00:01:54.206 --rc lcov_function_coverage=1 00:01:54.206 --rc genhtml_branch_coverage=1 00:01:54.206 --rc genhtml_function_coverage=1 00:01:54.206 --rc genhtml_legend=1 00:01:54.206 --rc geninfo_all_blocks=1 00:01:54.206 ' 00:01:54.206 21:02:48 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:01:54.206 --rc lcov_branch_coverage=1 00:01:54.206 --rc lcov_function_coverage=1 00:01:54.206 --rc genhtml_branch_coverage=1 00:01:54.206 --rc genhtml_function_coverage=1 00:01:54.206 --rc genhtml_legend=1 00:01:54.206 --rc geninfo_all_blocks=1 00:01:54.206 ' 00:01:54.206 21:02:48 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:01:54.206 --rc lcov_branch_coverage=1 00:01:54.206 --rc lcov_function_coverage=1 00:01:54.206 --rc genhtml_branch_coverage=1 00:01:54.206 --rc genhtml_function_coverage=1 00:01:54.206 --rc genhtml_legend=1 00:01:54.206 --rc geninfo_all_blocks=1 00:01:54.206 --no-external' 00:01:54.206 21:02:48 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:01:54.206 --rc lcov_branch_coverage=1 00:01:54.206 --rc lcov_function_coverage=1 00:01:54.206 --rc genhtml_branch_coverage=1 00:01:54.206 --rc genhtml_function_coverage=1 00:01:54.206 --rc genhtml_legend=1 00:01:54.206 --rc geninfo_all_blocks=1 00:01:54.206 --no-external' 00:01:54.206 21:02:48 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc 
genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:01:54.206 lcov: LCOV version 1.14 00:01:54.206 21:02:48 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/dsa-phy-autotest/spdk -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_base.info 00:01:58.387 /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:01:58.387 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:01:58.387 /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:01:58.387 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:01:58.387 /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:01:58.387 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:02:01.667 /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:01.667 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:08.232 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:08.232 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:08.232 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:08.232 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:08.232 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:08.232 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:08.232 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:08.232 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:08.232 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:08.232 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:08.232 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:08.232 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:08.232 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:08.232 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:08.232 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:08.232 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:08.232 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:08.232 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:08.232 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:08.232 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:08.232 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:08.232 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:08.232 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:08.232 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:08.232 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:08.232 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:08.232 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:08.232 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:08.232 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:08.232 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:08.232 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:08.232 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:08.232 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:08.232 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:08.232 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:08.232 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:08.232 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:08.232 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:08.232 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:08.232 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:08.232 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:08.232 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:08.232 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:08.232 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:08.232 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:08.232 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:08.232 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:08.232 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:08.232 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:08.232 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:08.232 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:08.232 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:08.232 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:08.232 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:08.232 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:08.232 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:08.232 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:08.232 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:08.232 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:08.232 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:08.232 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:08.232 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:08.232 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:08.232 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:08.232 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:08.232 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:08.232 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:08.232 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:08.232 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:08.232 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:08.232 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:08.232 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:08.232 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:08.232 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 
00:02:08.232 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:08.232 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:08.232 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:08.232 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:08.232 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:08.232 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:08.232 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:08.232 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:08.232 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:08.233 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:08.233 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:08.233 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:08.233 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:08.233 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:08.233 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:08.233 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:08.233 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:08.233 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:08.233 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:08.233 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:08.233 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:08.233 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:08.233 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:08.233 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:08.233 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:08.233 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:08.233 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:08.233 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:08.233 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:08.233 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:08.233 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:08.233 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:08.233 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:08.233 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:08.233 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:08.233 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:08.233 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:08.233 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:08.233 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:08.233 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:08.233 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:08.233 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:08.233 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:08.233 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:08.233 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:08.233 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:08.233 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:08.233 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:08.233 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:08.233 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:08.233 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:08.233 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:08.233 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:08.233 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:08.233 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:08.233 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:08.233 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:08.233 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:08.233 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:08.233 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:08.233 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:08.233 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:08.233 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:08.233 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:08.233 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:08.233 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:08.233 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:08.233 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:08.233 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:08.233 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:08.233 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:08.233 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:08.233 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:08.233 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:08.233 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:08.233 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:08.233 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:08.233 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:08.233 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:08.233 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:08.233 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:08.233 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:08.233 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:08.233 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:08.233 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:08.233 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:08.233 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:08.233 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:08.233 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:08.233 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:08.233 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:08.233 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:08.233 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:08.233 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:08.233 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:08.233 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:08.233 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:08.233 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:08.233 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:08.233 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:08.233 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:08.233 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:09.612 21:03:03 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:09.612 21:03:03 -- common/autotest_common.sh@710 -- # xtrace_disable 00:02:09.612 21:03:03 -- common/autotest_common.sh@10 -- # set +x 00:02:09.612 21:03:03 -- spdk/autotest.sh@91 -- # rm -f 00:02:09.612 21:03:03 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:02:12.911 0000:c9:00.0 (144d a80a): Already using the nvme driver 00:02:12.911 0000:74:02.0 (8086 0cfe): Already using the idxd driver 00:02:12.911 0000:f1:02.0 (8086 0cfe): Already using the idxd driver 00:02:12.911 0000:79:02.0 (8086 0cfe): Already using the idxd driver 00:02:12.911 0000:6f:01.0 (8086 0b25): Already using the idxd driver 00:02:12.911 0000:6f:02.0 (8086 0cfe): Already using the idxd driver 00:02:12.911 0000:f6:01.0 (8086 0b25): Already using the idxd driver 00:02:12.911 0000:f6:02.0 (8086 0cfe): Already using the idxd driver 00:02:12.911 0000:74:01.0 (8086 0b25): Already using the idxd driver 00:02:12.911 0000:6a:02.0 (8086 0cfe): Already using the idxd driver 00:02:12.911 0000:79:01.0 (8086 0b25): Already using the idxd driver 00:02:12.911 0000:ec:01.0 (8086 0b25): Already using the idxd driver 00:02:12.911 0000:6a:01.0 (8086 0b25): Already using the idxd driver 00:02:12.911 0000:ec:02.0 (8086 0cfe): Already using the idxd driver 00:02:12.911 0000:e7:01.0 (8086 0b25): Already using the idxd driver 00:02:12.911 0000:e7:02.0 (8086 0cfe): Already using the idxd driver 00:02:12.911 0000:f1:01.0 (8086 0b25): Already using the idxd driver 00:02:12.911 0000:03:00.0 (1344 
51c3): Already using the nvme driver 00:02:12.911 21:03:07 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:12.911 21:03:07 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:02:12.911 21:03:07 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:02:12.911 21:03:07 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:02:12.911 21:03:07 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:02:12.911 21:03:07 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:02:12.911 21:03:07 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:02:12.911 21:03:07 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:12.911 21:03:07 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:02:12.911 21:03:07 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:02:12.911 21:03:07 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:02:12.911 21:03:07 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:02:12.911 21:03:07 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:02:12.911 21:03:07 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:02:12.911 21:03:07 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:12.911 21:03:07 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:12.911 21:03:07 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:12.911 21:03:07 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:12.911 21:03:07 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:12.911 21:03:07 -- scripts/common.sh@387 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:12.911 No valid GPT data, bailing 00:02:12.911 21:03:07 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:12.911 21:03:07 -- scripts/common.sh@391 -- # pt= 00:02:12.911 21:03:07 -- scripts/common.sh@392 -- # return 1 00:02:12.911 21:03:07 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:12.911 1+0 records in 00:02:12.911 1+0 records out 00:02:12.911 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00280623 s, 374 MB/s 00:02:12.911 21:03:07 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:12.911 21:03:07 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:12.911 21:03:07 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:02:12.911 21:03:07 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:02:12.911 21:03:07 -- scripts/common.sh@387 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:02:12.911 No valid GPT data, bailing 00:02:12.911 21:03:07 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:02:13.171 21:03:07 -- scripts/common.sh@391 -- # pt= 00:02:13.172 21:03:07 -- scripts/common.sh@392 -- # return 1 00:02:13.172 21:03:07 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:02:13.172 1+0 records in 00:02:13.172 1+0 records out 00:02:13.172 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00300912 s, 348 MB/s 00:02:13.172 21:03:07 -- spdk/autotest.sh@118 -- # sync 00:02:13.172 21:03:07 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:13.172 21:03:07 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:13.172 21:03:07 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:18.483 21:03:11 -- spdk/autotest.sh@124 -- # uname -s 00:02:18.483 21:03:11 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 
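
Editor's note: the pre_cleanup pass traced above does three things per NVMe namespace: it skips zoned devices (anything whose /sys/block/<dev>/queue/zoned reports something other than "none"), it asks spdk-gpt.py and blkid whether a partition table already exists, and only when none is found ("No valid GPT data, bailing" / blkid returning an empty PTTYPE) does it zero the first MiB with dd. A condensed, standalone sketch of that logic follows; wipe_if_unused is a made-up helper name for illustration, not SPDK's — the real flow is inlined across autotest.sh, common/autotest_common.sh, and scripts/common.sh, and needs root to run:

#!/usr/bin/env bash
# Sketch of the pre-cleanup logic visible in the trace above: skip zoned
# namespaces, leave partitioned disks alone, wipe the rest. Run as root.
wipe_if_unused() {
    local dev=$1 name=${1##*/}
    # The kernel reports "none", "host-aware", or "host-managed" here;
    # anything but "none" is a zoned namespace and must not be wiped blindly.
    if [[ -e /sys/block/$name/queue/zoned ]] &&
       [[ $(cat "/sys/block/$name/queue/zoned") != none ]]; then
        echo "skipping zoned device $dev"
        return
    fi
    # blkid prints a PTTYPE value (gpt, dos, ...) only when a partition
    # table exists; an empty result means the device looks unused.
    if [[ -n $(blkid -s PTTYPE -o value "$dev") ]]; then
        echo "$dev carries a partition table, leaving it alone"
        return
    fi
    # Same wipe the log shows: zero the first 1 MiB of the namespace.
    dd if=/dev/zero of="$dev" bs=1M count=1
}

for dev in /dev/nvme*n1; do
    [[ -b $dev ]] && wipe_if_unused "$dev"
done
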
00:02:18.484 21:03:11 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/test-setup.sh 00:02:18.484 21:03:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:18.484 21:03:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:18.484 21:03:11 -- common/autotest_common.sh@10 -- # set +x 00:02:18.484 ************************************ 00:02:18.484 START TEST setup.sh 00:02:18.484 ************************************ 00:02:18.484 21:03:11 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/test-setup.sh 00:02:18.484 * Looking for test storage... 00:02:18.484 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup 00:02:18.484 21:03:11 -- setup/test-setup.sh@10 -- # uname -s 00:02:18.484 21:03:11 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:18.484 21:03:11 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/acl.sh 00:02:18.484 21:03:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:18.484 21:03:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:18.484 21:03:11 -- common/autotest_common.sh@10 -- # set +x 00:02:18.484 ************************************ 00:02:18.484 START TEST acl 00:02:18.484 ************************************ 00:02:18.484 21:03:12 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/acl.sh 00:02:18.484 * Looking for test storage... 00:02:18.484 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup 00:02:18.484 21:03:12 -- setup/acl.sh@10 -- # get_zoned_devs 00:02:18.484 21:03:12 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:02:18.484 21:03:12 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:02:18.484 21:03:12 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:02:18.484 21:03:12 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:02:18.484 21:03:12 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:02:18.484 21:03:12 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:02:18.484 21:03:12 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:18.484 21:03:12 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:02:18.484 21:03:12 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:02:18.484 21:03:12 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:02:18.484 21:03:12 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:02:18.484 21:03:12 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:02:18.484 21:03:12 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:02:18.484 21:03:12 -- setup/acl.sh@12 -- # devs=() 00:02:18.484 21:03:12 -- setup/acl.sh@12 -- # declare -a devs 00:02:18.484 21:03:12 -- setup/acl.sh@13 -- # drivers=() 00:02:18.484 21:03:12 -- setup/acl.sh@13 -- # declare -A drivers 00:02:18.484 21:03:12 -- setup/acl.sh@51 -- # setup reset 00:02:18.484 21:03:12 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:18.484 21:03:12 -- setup/common.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:02:21.767 21:03:15 -- setup/acl.sh@52 -- # collect_setup_devs 00:02:21.767 21:03:15 -- setup/acl.sh@16 -- # local dev driver 00:02:21.767 21:03:15 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:21.767 21:03:15 -- setup/acl.sh@15 -- # setup output status 
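
Editor's note: the starred START TEST / END TEST banners above are emitted by the run_test helper from test/common/autotest_common.sh; the '[' 2 -le 1 ']' check visible in the trace is its argument-count guard, and the xtrace_disable calls bracket the test body. A minimal reconstruction of the visible behavior — not SPDK's exact code, which also records per-test timing data:

# Minimal sketch of a run_test-style wrapper matching the banners above.
run_test() {
    # Guard seen in the trace: a name plus at least one command word.
    if [ "$#" -le 1 ]; then
        echo "usage: run_test <name> <command> [args...]" >&2
        return 1
    fi
    local name=$1
    shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    "$@"
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}

# Usage mirroring the log: run_test setup.sh .../test/setup/test-setup.sh
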
00:02:21.767 21:03:15 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:21.767 21:03:15 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh status 00:02:24.298 Hugepages 00:02:24.298 node hugesize free / total 00:02:24.298 21:03:18 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:24.298 21:03:18 -- setup/acl.sh@19 -- # continue 00:02:24.298 21:03:18 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:24.298 21:03:18 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:24.298 21:03:18 -- setup/acl.sh@19 -- # continue 00:02:24.298 21:03:18 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:24.298 21:03:18 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:24.298 21:03:18 -- setup/acl.sh@19 -- # continue 00:02:24.298 21:03:18 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:24.298 00:02:24.298 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:24.298 21:03:18 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:24.298 21:03:18 -- setup/acl.sh@19 -- # continue 00:02:24.298 21:03:18 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:24.298 21:03:18 -- setup/acl.sh@19 -- # [[ 0000:03:00.0 == *:*:*.* ]] 00:02:24.298 21:03:18 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:24.298 21:03:18 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\3\:\0\0\.\0* ]] 00:02:24.298 21:03:18 -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:24.298 21:03:18 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:24.298 21:03:18 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:24.298 21:03:18 -- setup/acl.sh@19 -- # [[ 0000:6a:01.0 == *:*:*.* ]] 00:02:24.298 21:03:18 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:24.298 21:03:18 -- setup/acl.sh@20 -- # continue 00:02:24.298 21:03:18 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:24.298 21:03:18 -- setup/acl.sh@19 -- # [[ 0000:6a:02.0 == *:*:*.* ]] 00:02:24.298 21:03:18 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:24.298 21:03:18 -- setup/acl.sh@20 -- # continue 00:02:24.298 21:03:18 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:24.298 21:03:18 -- setup/acl.sh@19 -- # [[ 0000:6f:01.0 == *:*:*.* ]] 00:02:24.298 21:03:18 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:24.298 21:03:18 -- setup/acl.sh@20 -- # continue 00:02:24.298 21:03:18 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:24.298 21:03:18 -- setup/acl.sh@19 -- # [[ 0000:6f:02.0 == *:*:*.* ]] 00:02:24.298 21:03:18 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:24.298 21:03:18 -- setup/acl.sh@20 -- # continue 00:02:24.298 21:03:18 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:24.298 21:03:18 -- setup/acl.sh@19 -- # [[ 0000:74:01.0 == *:*:*.* ]] 00:02:24.298 21:03:18 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:24.298 21:03:18 -- setup/acl.sh@20 -- # continue 00:02:24.298 21:03:18 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:24.298 21:03:18 -- setup/acl.sh@19 -- # [[ 0000:74:02.0 == *:*:*.* ]] 00:02:24.298 21:03:18 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:24.298 21:03:18 -- setup/acl.sh@20 -- # continue 00:02:24.298 21:03:18 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:24.298 21:03:18 -- setup/acl.sh@19 -- # [[ 0000:79:01.0 == *:*:*.* ]] 00:02:24.298 21:03:18 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:24.298 21:03:18 -- setup/acl.sh@20 -- # continue 00:02:24.298 21:03:18 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:24.298 21:03:18 -- setup/acl.sh@19 -- # [[ 0000:79:02.0 == *:*:*.* ]] 00:02:24.298 
21:03:18 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:24.298 21:03:18 -- setup/acl.sh@20 -- # continue 00:02:24.298 21:03:18 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:24.298 21:03:18 -- setup/acl.sh@19 -- # [[ 0000:c9:00.0 == *:*:*.* ]] 00:02:24.298 21:03:18 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:24.298 21:03:18 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\c\9\:\0\0\.\0* ]] 00:02:24.298 21:03:18 -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:24.298 21:03:18 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:24.298 21:03:18 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:24.298 21:03:18 -- setup/acl.sh@19 -- # [[ 0000:e7:01.0 == *:*:*.* ]] 00:02:24.298 21:03:18 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:24.298 21:03:18 -- setup/acl.sh@20 -- # continue 00:02:24.298 21:03:18 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:24.298 21:03:18 -- setup/acl.sh@19 -- # [[ 0000:e7:02.0 == *:*:*.* ]] 00:02:24.298 21:03:18 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:24.298 21:03:18 -- setup/acl.sh@20 -- # continue 00:02:24.298 21:03:18 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:24.298 21:03:18 -- setup/acl.sh@19 -- # [[ 0000:ec:01.0 == *:*:*.* ]] 00:02:24.298 21:03:18 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:24.298 21:03:18 -- setup/acl.sh@20 -- # continue 00:02:24.298 21:03:18 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:24.298 21:03:18 -- setup/acl.sh@19 -- # [[ 0000:ec:02.0 == *:*:*.* ]] 00:02:24.298 21:03:18 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:24.298 21:03:18 -- setup/acl.sh@20 -- # continue 00:02:24.298 21:03:18 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:24.298 21:03:18 -- setup/acl.sh@19 -- # [[ 0000:f1:01.0 == *:*:*.* ]] 00:02:24.298 21:03:18 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:24.298 21:03:18 -- setup/acl.sh@20 -- # continue 00:02:24.298 21:03:18 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:24.298 21:03:18 -- setup/acl.sh@19 -- # [[ 0000:f1:02.0 == *:*:*.* ]] 00:02:24.298 21:03:18 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:24.298 21:03:18 -- setup/acl.sh@20 -- # continue 00:02:24.298 21:03:18 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:24.298 21:03:18 -- setup/acl.sh@19 -- # [[ 0000:f6:01.0 == *:*:*.* ]] 00:02:24.298 21:03:18 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:24.298 21:03:18 -- setup/acl.sh@20 -- # continue 00:02:24.298 21:03:18 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:24.298 21:03:18 -- setup/acl.sh@19 -- # [[ 0000:f6:02.0 == *:*:*.* ]] 00:02:24.298 21:03:18 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:24.298 21:03:18 -- setup/acl.sh@20 -- # continue 00:02:24.298 21:03:18 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:24.298 21:03:18 -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:02:24.298 21:03:18 -- setup/acl.sh@54 -- # run_test denied denied 00:02:24.298 21:03:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:24.298 21:03:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:24.298 21:03:18 -- common/autotest_common.sh@10 -- # set +x 00:02:24.298 ************************************ 00:02:24.298 START TEST denied 00:02:24.298 ************************************ 00:02:24.298 21:03:18 -- common/autotest_common.sh@1111 -- # denied 00:02:24.298 21:03:18 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:03:00.0' 00:02:24.298 21:03:18 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:03:00.0' 00:02:24.298 21:03:18 -- setup/acl.sh@38 -- # setup output 
config 00:02:24.298 21:03:18 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:24.298 21:03:18 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh config 00:02:28.485 0000:03:00.0 (1344 51c3): Skipping denied controller at 0000:03:00.0 00:02:28.485 21:03:22 -- setup/acl.sh@40 -- # verify 0000:03:00.0 00:02:28.485 21:03:22 -- setup/acl.sh@28 -- # local dev driver 00:02:28.485 21:03:22 -- setup/acl.sh@30 -- # for dev in "$@" 00:02:28.485 21:03:22 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:03:00.0 ]] 00:02:28.485 21:03:22 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:03:00.0/driver 00:02:28.485 21:03:22 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:28.485 21:03:22 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:28.485 21:03:22 -- setup/acl.sh@41 -- # setup reset 00:02:28.485 21:03:22 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:28.485 21:03:22 -- setup/common.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:02:32.692 00:02:32.692 real 0m7.869s 00:02:32.692 user 0m2.015s 00:02:32.692 sys 0m3.712s 00:02:32.692 21:03:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:32.692 21:03:26 -- common/autotest_common.sh@10 -- # set +x 00:02:32.692 ************************************ 00:02:32.692 END TEST denied 00:02:32.692 ************************************ 00:02:32.692 21:03:26 -- setup/acl.sh@55 -- # run_test allowed allowed 00:02:32.692 21:03:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:32.692 21:03:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:32.692 21:03:26 -- common/autotest_common.sh@10 -- # set +x 00:02:32.692 ************************************ 00:02:32.692 START TEST allowed 00:02:32.692 ************************************ 00:02:32.692 21:03:26 -- common/autotest_common.sh@1111 -- # allowed 00:02:32.692 21:03:26 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:03:00.0 00:02:32.692 21:03:26 -- setup/acl.sh@46 -- # grep -E '0000:03:00.0 .*: nvme -> .*' 00:02:32.692 21:03:26 -- setup/acl.sh@45 -- # setup output config 00:02:32.692 21:03:26 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:32.692 21:03:26 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh config 00:02:35.980 0000:03:00.0 (1344 51c3): nvme -> vfio-pci 00:02:35.980 21:03:29 -- setup/acl.sh@47 -- # verify 0000:c9:00.0 00:02:35.980 21:03:29 -- setup/acl.sh@28 -- # local dev driver 00:02:35.980 21:03:29 -- setup/acl.sh@30 -- # for dev in "$@" 00:02:35.980 21:03:29 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:c9:00.0 ]] 00:02:35.980 21:03:29 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:c9:00.0/driver 00:02:35.980 21:03:29 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:35.980 21:03:29 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:35.980 21:03:29 -- setup/acl.sh@48 -- # setup reset 00:02:35.980 21:03:29 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:35.980 21:03:29 -- setup/common.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:02:39.268 00:02:39.268 real 0m6.712s 00:02:39.268 user 0m1.921s 00:02:39.268 sys 0m3.687s 00:02:39.268 21:03:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:39.268 21:03:33 -- common/autotest_common.sh@10 -- # set +x 00:02:39.268 ************************************ 00:02:39.268 END TEST allowed 00:02:39.268 ************************************ 00:02:39.268 00:02:39.268 real 0m21.081s 
00:02:39.268 user 0m6.183s 00:02:39.268 sys 0m11.501s 00:02:39.268 21:03:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:39.268 21:03:33 -- common/autotest_common.sh@10 -- # set +x 00:02:39.268 ************************************ 00:02:39.268 END TEST acl 00:02:39.268 ************************************ 00:02:39.268 21:03:33 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/hugepages.sh 00:02:39.268 21:03:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:39.268 21:03:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:39.268 21:03:33 -- common/autotest_common.sh@10 -- # set +x 00:02:39.268 ************************************ 00:02:39.268 START TEST hugepages 00:02:39.268 ************************************ 00:02:39.268 21:03:33 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/hugepages.sh 00:02:39.268 * Looking for test storage... 00:02:39.268 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup 00:02:39.268 21:03:33 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:02:39.268 21:03:33 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:02:39.268 21:03:33 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:02:39.268 21:03:33 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:02:39.268 21:03:33 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:02:39.268 21:03:33 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:02:39.268 21:03:33 -- setup/common.sh@17 -- # local get=Hugepagesize 00:02:39.268 21:03:33 -- setup/common.sh@18 -- # local node= 00:02:39.268 21:03:33 -- setup/common.sh@19 -- # local var val 00:02:39.268 21:03:33 -- setup/common.sh@20 -- # local mem_f mem 00:02:39.268 21:03:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:39.268 21:03:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:39.268 21:03:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:39.268 21:03:33 -- setup/common.sh@28 -- # mapfile -t mem 00:02:39.268 21:03:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:39.268 21:03:33 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.268 21:03:33 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.268 21:03:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437988 kB' 'MemFree: 104381124 kB' 'MemAvailable: 109094408 kB' 'Buffers: 2800 kB' 'Cached: 13375556 kB' 'SwapCached: 0 kB' 'Active: 9407356 kB' 'Inactive: 4601764 kB' 'Active(anon): 8835996 kB' 'Inactive(anon): 0 kB' 'Active(file): 571360 kB' 'Inactive(file): 4601764 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 639880 kB' 'Mapped: 191776 kB' 'Shmem: 8205232 kB' 'KReclaimable: 581504 kB' 'Slab: 1300684 kB' 'SReclaimable: 581504 kB' 'SUnreclaim: 719180 kB' 'KernelStack: 25440 kB' 'PageTables: 10608 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69510444 kB' 'Committed_AS: 10490636 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 231112 kB' 'VmallocChunk: 0 kB' 'Percpu: 187392 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 
'DirectMap4k: 3698752 kB' 'DirectMap2M: 27535360 kB' 'DirectMap1G: 104857600 kB' 00:02:39.268 21:03:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.268 21:03:33 -- setup/common.sh@32 -- # continue 00:02:39.268 21:03:33 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.268 21:03:33 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.268 21:03:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.268 21:03:33 -- setup/common.sh@32 -- # continue 00:02:39.268 21:03:33 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.268 21:03:33 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.268 21:03:33 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.269 21:03:33 -- setup/common.sh@32 -- # continue 00:02:39.269 21:03:33 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.269 21:03:33 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.269 21:03:33 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.269 21:03:33 -- setup/common.sh@32 -- # continue 00:02:39.269 21:03:33 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.269 21:03:33 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.269 21:03:33 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.269 21:03:33 -- setup/common.sh@32 -- # continue 00:02:39.269 21:03:33 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.269 21:03:33 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.269 21:03:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.269 21:03:33 -- setup/common.sh@32 -- # continue 00:02:39.269 21:03:33 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.269 21:03:33 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.269 21:03:33 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.269 21:03:33 -- setup/common.sh@32 -- # continue 00:02:39.269 21:03:33 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.269 21:03:33 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.269 21:03:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.269 21:03:33 -- setup/common.sh@32 -- # continue 00:02:39.269 21:03:33 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.269 21:03:33 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.269 21:03:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.269 21:03:33 -- setup/common.sh@32 -- # continue 00:02:39.269 21:03:33 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.269 21:03:33 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.269 21:03:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.269 21:03:33 -- setup/common.sh@32 -- # continue 00:02:39.269 21:03:33 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.269 21:03:33 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.269 21:03:33 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.269 21:03:33 -- setup/common.sh@32 -- # continue 00:02:39.269 21:03:33 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.269 21:03:33 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.269 21:03:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.269 21:03:33 -- setup/common.sh@32 -- # continue 00:02:39.269 21:03:33 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.269 21:03:33 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.269 21:03:33 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:39.269 21:03:33 -- setup/common.sh@32 -- # continue 
00:02:39.269 21:03:33 -- setup/common.sh@31 -- # IFS=': '
00:02:39.269 21:03:33 -- setup/common.sh@31 -- # read -r var val _
00:02:39.269 21:03:33 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:02:39.269 21:03:33 -- setup/common.sh@32 -- # continue
00:02:39.269 [... the same @31 read / @32 compare / @32 continue xtrace repeats for each remaining /proc/meminfo field (SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free, HugePages_Rsvd, HugePages_Surp) until the requested field comes up ...]
00:02:39.270 21:03:33 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:02:39.270 21:03:33 -- setup/common.sh@33 -- # echo 2048
00:02:39.270 21:03:33 -- setup/common.sh@33 -- # return 0
00:02:39.270 21:03:33 -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:02:39.270 21:03:33 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:02:39.270 21:03:33 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:02:39.270 21:03:33 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:02:39.270 21:03:33 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:02:39.270 21:03:33 -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:02:39.270 21:03:33 -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:02:39.270 21:03:33 -- setup/hugepages.sh@207 -- # get_nodes
00:02:39.270 21:03:33 -- setup/hugepages.sh@27 -- # local node
00:02:39.270 21:03:33 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:39.270 21:03:33 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:02:39.270 21:03:33 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:39.270 21:03:33 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:02:39.270 21:03:33 -- setup/hugepages.sh@32 -- # no_nodes=2
00:02:39.270 21:03:33 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:02:39.270 21:03:33 -- setup/hugepages.sh@208 -- # clear_hp
00:02:39.270 21:03:33 -- setup/hugepages.sh@37 -- # local node hp
00:02:39.270 21:03:33 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:02:39.270 21:03:33 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:02:39.270 21:03:33 -- setup/hugepages.sh@41 -- # echo 0
00:02:39.270 21:03:33 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:02:39.270 21:03:33 -- setup/hugepages.sh@41 -- # echo 0
00:02:39.270 21:03:33 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:02:39.270 21:03:33 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:02:39.270 21:03:33 -- setup/hugepages.sh@41 -- # echo 0
00:02:39.270 21:03:33 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:02:39.270 21:03:33 -- setup/hugepages.sh@41 -- # echo 0
00:02:39.270 21:03:33 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:02:39.270 21:03:33 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
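Before the test starts, the clear_hp loop above zeroes every per-node hugepage pool. A minimal standalone sketch of the same reset, assuming the sysfs layout the loop globs over and that the redirect target elided by xtrace is each pool's nr_hugepages file (needs root):

    #!/usr/bin/env bash
    # Reset every hugepage pool on every NUMA node to 0 pages, mirroring
    # the clear_hp trace above (plain glob instead of the script's extglob).
    for node in /sys/devices/system/node/node[0-9]*; do
      for hp in "$node"/hugepages/hugepages-*; do
        echo 0 > "$hp/nr_hugepages"   # drop all pages of this size on this node
      done
    done

Writing per node (rather than to the global /proc/sys/vm/nr_hugepages captured in global_huge_nr) is what lets the later per-node accounting start from a known-empty state.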
00:02:39.270 21:03:33 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:02:39.270 21:03:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:02:39.270 21:03:33 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:02:39.270 21:03:33 -- common/autotest_common.sh@10 -- # set +x
00:02:39.270 ************************************
00:02:39.270 START TEST default_setup
00:02:39.270 ************************************
00:02:39.270 21:03:33 -- common/autotest_common.sh@1111 -- # default_setup
00:02:39.270 21:03:33 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:02:39.270 21:03:33 -- setup/hugepages.sh@49 -- # local size=2097152
00:02:39.270 21:03:33 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:02:39.270 21:03:33 -- setup/hugepages.sh@51 -- # shift
00:02:39.270 21:03:33 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:02:39.271 21:03:33 -- setup/hugepages.sh@52 -- # local node_ids
00:02:39.271 21:03:33 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:02:39.271 21:03:33 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:02:39.271 21:03:33 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:02:39.271 21:03:33 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:02:39.271 21:03:33 -- setup/hugepages.sh@62 -- # local user_nodes
00:02:39.271 21:03:33 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:02:39.271 21:03:33 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:02:39.271 21:03:33 -- setup/hugepages.sh@67 -- # nodes_test=()
00:02:39.271 21:03:33 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:02:39.271 21:03:33 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:02:39.271 21:03:33 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:02:39.271 21:03:33 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:02:39.271 21:03:33 -- setup/hugepages.sh@73 -- # return 0
00:02:39.271 21:03:33 -- setup/hugepages.sh@137 -- # setup output
00:02:39.271 21:03:33 -- setup/common.sh@9 -- # [[ output == output ]]
00:02:39.271 21:03:33 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh
00:02:42.577 0000:74:02.0 (8086 0cfe): idxd -> vfio-pci
00:02:42.577 0000:f1:02.0 (8086 0cfe): idxd -> vfio-pci
00:02:42.577 0000:79:02.0 (8086 0cfe): idxd -> vfio-pci
00:02:42.577 0000:6f:01.0 (8086 0b25): idxd -> vfio-pci
00:02:42.577 0000:6f:02.0 (8086 0cfe): idxd -> vfio-pci
00:02:42.577 0000:f6:01.0 (8086 0b25): idxd -> vfio-pci
00:02:42.577 0000:f6:02.0 (8086 0cfe): idxd -> vfio-pci
00:02:42.577 0000:74:01.0 (8086 0b25): idxd -> vfio-pci
00:02:42.577 0000:6a:02.0 (8086 0cfe): idxd -> vfio-pci
00:02:42.577 0000:79:01.0 (8086 0b25): idxd -> vfio-pci
00:02:42.577 0000:ec:01.0 (8086 0b25): idxd -> vfio-pci
00:02:42.577 0000:6a:01.0 (8086 0b25): idxd -> vfio-pci
00:02:42.577 0000:ec:02.0 (8086 0cfe): idxd -> vfio-pci
00:02:42.577 0000:e7:01.0 (8086 0b25): idxd -> vfio-pci
00:02:42.577 0000:e7:02.0 (8086 0cfe): idxd -> vfio-pci
00:02:42.577 0000:f1:01.0 (8086 0b25): idxd -> vfio-pci
00:02:43.149 0000:c9:00.0 (144d a80a): nvme -> vfio-pci
00:02:43.411 0000:03:00.0 (1344 51c3): nvme -> vfio-pci
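A note on the arithmetic traced in get_test_nr_hugepages above: the request was size=2097152 kB (2 GiB) against the Hugepagesize of 2048 kB read earlier, which is where nr_hugepages=1024 comes from. A minimal sketch of that derivation; the awk lookup is a stand-in for the script's own read loop, not its actual code:

    #!/usr/bin/env bash
    # Turn a kB-sized request into a hugepage count, as the traced
    # get_test_nr_hugepages does: requested kB / default hugepage kB.
    size_kb=2097152   # the 2 GiB request seen in the trace
    hp_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this box
    echo "nr_hugepages=$(( size_kb / hp_kb ))"                 # -> 1024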
00:02:43.411 21:03:37 -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:02:43.411 21:03:37 -- setup/hugepages.sh@89 -- # local node
00:02:43.411 21:03:37 -- setup/hugepages.sh@90 -- # local sorted_t
00:02:43.411 21:03:37 -- setup/hugepages.sh@91 -- # local sorted_s
00:02:43.411 21:03:37 -- setup/hugepages.sh@92 -- # local surp
00:02:43.411 21:03:37 -- setup/hugepages.sh@93 -- # local resv
00:02:43.411 21:03:37 -- setup/hugepages.sh@94 -- # local anon
00:02:43.411 21:03:37 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:02:43.411 21:03:37 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:02:43.411 21:03:37 -- setup/common.sh@17 -- # local get=AnonHugePages
00:02:43.411 21:03:37 -- setup/common.sh@18 -- # local node=
00:02:43.411 21:03:37 -- setup/common.sh@19 -- # local var val
00:02:43.411 21:03:37 -- setup/common.sh@20 -- # local mem_f mem
00:02:43.411 21:03:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:43.411 21:03:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:43.411 21:03:37 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:43.411 21:03:37 -- setup/common.sh@28 -- # mapfile -t mem
00:02:43.411 21:03:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:43.411 21:03:37 -- setup/common.sh@31 -- # IFS=': '
00:02:43.411 21:03:37 -- setup/common.sh@31 -- # read -r var val _
00:02:43.411 21:03:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437988 kB' 'MemFree: 106608220 kB' 'MemAvailable: 111321248 kB' 'Buffers: 2800 kB' 'Cached: 13375796 kB' 'SwapCached: 0 kB' 'Active: 9433052 kB' 'Inactive: 4601764 kB' 'Active(anon): 8861692 kB' 'Inactive(anon): 0 kB' 'Active(file): 571360 kB' 'Inactive(file): 4601764 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 665552 kB' 'Mapped: 192204 kB' 'Shmem: 8205472 kB' 'KReclaimable: 581248 kB' 'Slab: 1294488 kB' 'SReclaimable: 581248 kB' 'SUnreclaim: 713240 kB' 'KernelStack: 25504 kB' 'PageTables: 11904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559020 kB' 'Committed_AS: 10559444 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230952 kB' 'VmallocChunk: 0 kB' 'Percpu: 187392 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3698752 kB' 'DirectMap2M: 27535360 kB' 'DirectMap1G: 104857600 kB'
00:02:43.411 [... @31 read / @32 compare / @32 continue xtrace repeats field by field (MemTotal through HardwareCorrupted), matching each /proc/meminfo line against AnonHugePages ...]
00:02:43.676 21:03:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:43.676 21:03:37 -- setup/common.sh@33 -- # echo 0
00:02:43.676 21:03:37 -- setup/common.sh@33 -- # return 0
00:02:43.676 21:03:37 -- setup/hugepages.sh@97 -- # anon=0
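Each get_meminfo call in this trace scans its file line by line with IFS=': ' until the requested field matches, falling back from the per-node file probed at common.sh@23 to /proc/meminfo when no node is given. A compact re-implementation under those assumptions; the sed prefix-strip stands in for the script's own "${mem[@]#Node +([0-9]) }" expansion:

    #!/usr/bin/env bash
    # Look up one /proc/meminfo (or per-node meminfo) field, the way the
    # traced get_meminfo does: split on ': ' and return the first match.
    get_meminfo() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo
      [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
      local var val _
      # per-node files prefix every line with "Node N "; strip it first
      while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
      return 1
    }
    get_meminfo HugePages_Total     # -> 1024, matching the snapshot above
    get_meminfo AnonHugePages 0     # per-node lookup on node0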
00:02:43.676 21:03:37 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:02:43.676 21:03:37 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:43.676 21:03:37 -- setup/common.sh@18 -- # local node=
00:02:43.676 21:03:37 -- setup/common.sh@19 -- # local var val
00:02:43.676 21:03:37 -- setup/common.sh@20 -- # local mem_f mem
00:02:43.676 21:03:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:43.676 21:03:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:43.676 21:03:37 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:43.676 21:03:37 -- setup/common.sh@28 -- # mapfile -t mem
00:02:43.676 21:03:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:43.676 21:03:37 -- setup/common.sh@31 -- # IFS=': '
00:02:43.676 21:03:37 -- setup/common.sh@31 -- # read -r var val _
00:02:43.676 21:03:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437988 kB' 'MemFree: 106607280 kB' 'MemAvailable: 111320308 kB' 'Buffers: 2800 kB' 'Cached: 13375796 kB' 'SwapCached: 0 kB' 'Active: 9434460 kB' 'Inactive: 4601764 kB' 'Active(anon): 8863100 kB' 'Inactive(anon): 0 kB' 'Active(file): 571360 kB' 'Inactive(file): 4601764 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 666396 kB' 'Mapped: 192212 kB' 'Shmem: 8205472 kB' 'KReclaimable: 581248 kB' 'Slab: 1294488 kB' 'SReclaimable: 581248 kB' 'SUnreclaim: 713240 kB' 'KernelStack: 25456 kB' 'PageTables: 11912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559020 kB' 'Committed_AS: 10559456 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230888 kB' 'VmallocChunk: 0 kB' 'Percpu: 187392 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3698752 kB' 'DirectMap2M: 27535360 kB' 'DirectMap1G: 104857600 kB'
00:02:43.676 [... identical @31 read / @32 compare / @32 continue xtrace repeats, matching each /proc/meminfo field against HugePages_Surp ...]
00:02:43.677 21:03:37 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:43.677 21:03:37 -- setup/common.sh@33 -- # echo 0
00:02:43.677 21:03:37 -- setup/common.sh@33 -- # return 0
00:02:43.677 21:03:37 -- setup/hugepages.sh@99 -- # surp=0
00:02:43.677 21:03:37 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:43.677 21:03:37 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:43.677 21:03:37 -- setup/common.sh@18 -- # local node=
00:02:43.677 21:03:37 -- setup/common.sh@19 -- # local var val
00:02:43.677 21:03:37 -- setup/common.sh@20 -- # local mem_f mem
00:02:43.677 21:03:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:43.677 21:03:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:43.677 21:03:37 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:43.677 21:03:37 -- setup/common.sh@28 -- # mapfile -t mem
00:02:43.677 21:03:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:43.677 21:03:37 -- setup/common.sh@31 -- # IFS=': '
00:02:43.677 21:03:37 -- setup/common.sh@31 -- # read -r var val _
00:02:43.677 21:03:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437988 kB' 'MemFree: 106611588 kB' 'MemAvailable: 111324616 kB' 'Buffers: 2800 kB' 'Cached: 13375804 kB' 'SwapCached: 0 kB' 'Active: 9432308 kB' 'Inactive: 4601764 kB' 'Active(anon): 8860948 kB' 'Inactive(anon): 0 kB' 'Active(file): 571360 kB' 'Inactive(file): 4601764 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 664740 kB' 'Mapped: 192116 kB' 'Shmem: 8205480 kB' 'KReclaimable: 581248 kB' 'Slab: 1294492 kB' 'SReclaimable: 581248 kB' 'SUnreclaim: 713244 kB' 'KernelStack: 25296 kB' 'PageTables: 11100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559020 kB' 'Committed_AS: 10559472 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230872 kB' 'VmallocChunk: 0 kB' 'Percpu: 187392 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3698752 kB' 'DirectMap2M: 27535360 kB' 'DirectMap1G: 104857600 kB'
00:02:43.677 [... identical @31 read / @32 compare / @32 continue xtrace repeats, matching each /proc/meminfo field against HugePages_Rsvd ...]
00:02:43.678 21:03:37 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:43.678 21:03:37 -- setup/common.sh@33 -- # echo 0
00:02:43.678 21:03:37 -- setup/common.sh@33 -- # return 0
00:02:43.678 21:03:37 -- setup/hugepages.sh@100 -- # resv=0
00:02:43.678 21:03:37 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:02:43.678 nr_hugepages=1024
00:02:43.678 21:03:37 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:02:43.678 resv_hugepages=0
00:02:43.678 21:03:37 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:02:43.678 surplus_hugepages=0
00:02:43.678 21:03:37 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:43.678 anon_hugepages=0
00:02:43.678 21:03:37 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:43.678 21:03:37 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
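The two (( ... )) guards above encode the check this whole walk has been building toward: the 1024 pages requested must equal what the kernel now reports, net of surplus and reserved pages. Restated as a standalone sketch, reading the fields with awk rather than the traced helper, and taking the literal 1024 in the trace to be the requested count:

    #!/usr/bin/env bash
    # The invariant behind hugepages.sh@107/@109 above: requested pages must
    # match the kernel's accounting once surplus and reserved are netted out.
    nr_requested=1024
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
    (( total == nr_requested + surp + resv )) || echo 'hugepage accounting mismatch' >&2
    (( total == nr_requested ))               || echo 'unexpected surplus/reserved pages' >&2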
-- # read -r var val _ 00:02:43.678 21:03:37 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.678 21:03:37 -- setup/common.sh@32 -- # continue 00:02:43.678 21:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.678 21:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.678 21:03:37 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.678 21:03:37 -- setup/common.sh@32 -- # continue 00:02:43.678 21:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.678 21:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.678 21:03:37 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.678 21:03:37 -- setup/common.sh@32 -- # continue 00:02:43.678 21:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.678 21:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.678 21:03:37 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.678 21:03:37 -- setup/common.sh@32 -- # continue 00:02:43.678 21:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.678 21:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.678 21:03:37 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.678 21:03:37 -- setup/common.sh@32 -- # continue 00:02:43.678 21:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.678 21:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.678 21:03:37 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.678 21:03:37 -- setup/common.sh@32 -- # continue 00:02:43.678 21:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.678 21:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.678 21:03:37 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.678 21:03:37 -- setup/common.sh@32 -- # continue 00:02:43.678 21:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.678 21:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.678 21:03:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.678 21:03:37 -- setup/common.sh@32 -- # continue 00:02:43.678 21:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.678 21:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.678 21:03:37 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.678 21:03:37 -- setup/common.sh@32 -- # continue 00:02:43.678 21:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.678 21:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.678 21:03:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.678 21:03:37 -- setup/common.sh@32 -- # continue 00:02:43.678 21:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.678 21:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.678 21:03:37 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.678 21:03:37 -- setup/common.sh@32 -- # continue 00:02:43.678 21:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.678 21:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.678 21:03:37 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.678 21:03:37 -- setup/common.sh@32 -- # continue 00:02:43.678 21:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.678 21:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.678 21:03:37 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.678 21:03:37 -- 
setup/common.sh@32 -- # continue 00:02:43.678 21:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.679 21:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.679 21:03:37 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.679 21:03:37 -- setup/common.sh@32 -- # continue 00:02:43.679 21:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.679 21:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.679 21:03:37 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.679 21:03:37 -- setup/common.sh@32 -- # continue 00:02:43.679 21:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.679 21:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.679 21:03:37 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.679 21:03:37 -- setup/common.sh@32 -- # continue 00:02:43.679 21:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.679 21:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.679 21:03:37 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.679 21:03:37 -- setup/common.sh@32 -- # continue 00:02:43.679 21:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.679 21:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.679 21:03:37 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.679 21:03:37 -- setup/common.sh@32 -- # continue 00:02:43.679 21:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.679 21:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.679 21:03:37 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.679 21:03:37 -- setup/common.sh@32 -- # continue 00:02:43.679 21:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.679 21:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.679 21:03:37 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.679 21:03:37 -- setup/common.sh@32 -- # continue 00:02:43.679 21:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.679 21:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.679 21:03:37 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.679 21:03:37 -- setup/common.sh@32 -- # continue 00:02:43.679 21:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.679 21:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.679 21:03:37 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.679 21:03:37 -- setup/common.sh@32 -- # continue 00:02:43.679 21:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.679 21:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.679 21:03:37 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.679 21:03:37 -- setup/common.sh@32 -- # continue 00:02:43.679 21:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.679 21:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.679 21:03:37 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.679 21:03:37 -- setup/common.sh@32 -- # continue 00:02:43.679 21:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.679 21:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.679 21:03:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.679 21:03:37 -- setup/common.sh@32 -- # continue 00:02:43.679 21:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.679 21:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.679 21:03:37 -- 
setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.679 21:03:37 -- setup/common.sh@32 -- # continue 00:02:43.679 21:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.679 21:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.679 21:03:37 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.679 21:03:37 -- setup/common.sh@32 -- # continue 00:02:43.679 21:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.679 21:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.679 21:03:37 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.679 21:03:37 -- setup/common.sh@32 -- # continue 00:02:43.679 21:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.679 21:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.679 21:03:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.679 21:03:37 -- setup/common.sh@32 -- # continue 00:02:43.679 21:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.679 21:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.679 21:03:37 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.679 21:03:37 -- setup/common.sh@32 -- # continue 00:02:43.679 21:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.679 21:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.679 21:03:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.679 21:03:37 -- setup/common.sh@32 -- # continue 00:02:43.679 21:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.679 21:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.679 21:03:37 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.679 21:03:37 -- setup/common.sh@32 -- # continue 00:02:43.679 21:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.679 21:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.679 21:03:37 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.679 21:03:37 -- setup/common.sh@32 -- # continue 00:02:43.679 21:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.679 21:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.679 21:03:37 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.679 21:03:37 -- setup/common.sh@32 -- # continue 00:02:43.679 21:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.679 21:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.679 21:03:37 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.679 21:03:37 -- setup/common.sh@32 -- # continue 00:02:43.679 21:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.679 21:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.679 21:03:37 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.679 21:03:37 -- setup/common.sh@32 -- # continue 00:02:43.679 21:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.679 21:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.679 21:03:37 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.679 21:03:37 -- setup/common.sh@32 -- # continue 00:02:43.679 21:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.679 21:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.679 21:03:37 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.679 21:03:37 -- setup/common.sh@32 -- # continue 
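
The xtrace above is setup/common.sh's get_meminfo helper scanning the chosen meminfo file one key at a time: each line is split on ': ', the key is compared against the requested field, and the loop either continues or echoes the value and returns. A minimal self-contained sketch of that loop, assuming the per-node fallback visible in the trace (names here are illustrative, not the exact SPDK source):

    #!/usr/bin/env bash
    shopt -s extglob    # needed for the "Node N " prefix strip below

    # get_meminfo_sketch FIELD [NODE] - echo FIELD's value from meminfo.
    get_meminfo_sketch() {
        local get=$1 node=${2:-} mem_f=/proc/meminfo line var val _
        # Per-node queries read that node's own meminfo when it exists,
        # mirroring the [[ -e /sys/devices/system/node/node$node/meminfo ]]
        # test in the trace.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS= read -r line; do
            line=${line#Node +([0-9]) }       # node files prefix keys with "Node N "
            IFS=': ' read -r var val _ <<<"$line"
            [[ $var == "$get" ]] || continue  # the compare/continue pairs in the log
            echo "$val"                       # e.g. 0 for HugePages_Rsvd here
            return 0
        done <"$mem_f"
        return 1
    }

With this run's /proc/meminfo, resv=$(get_meminfo_sketch HugePages_Rsvd) yields the resv=0 seen in the trace.
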
00:02:43.679 21:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.679 21:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.679 21:03:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.679 21:03:37 -- setup/common.sh@32 -- # continue 00:02:43.679 21:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.679 21:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.679 21:03:37 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.679 21:03:37 -- setup/common.sh@32 -- # continue 00:02:43.679 21:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.679 21:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.679 21:03:37 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.679 21:03:37 -- setup/common.sh@32 -- # continue 00:02:43.679 21:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.679 21:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.679 21:03:37 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.679 21:03:37 -- setup/common.sh@32 -- # continue 00:02:43.679 21:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.679 21:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.679 21:03:37 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.679 21:03:37 -- setup/common.sh@32 -- # continue 00:02:43.679 21:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.679 21:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.679 21:03:37 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.679 21:03:37 -- setup/common.sh@32 -- # continue 00:02:43.679 21:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.679 21:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.679 21:03:37 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.679 21:03:37 -- setup/common.sh@32 -- # continue 00:02:43.679 21:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.679 21:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.679 21:03:37 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.679 21:03:37 -- setup/common.sh@32 -- # continue 00:02:43.679 21:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.679 21:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.679 21:03:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.679 21:03:37 -- setup/common.sh@33 -- # echo 1024 00:02:43.679 21:03:37 -- setup/common.sh@33 -- # return 0 00:02:43.679 21:03:37 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:43.679 21:03:37 -- setup/hugepages.sh@112 -- # get_nodes 00:02:43.679 21:03:37 -- setup/hugepages.sh@27 -- # local node 00:02:43.679 21:03:37 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:43.679 21:03:37 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:43.679 21:03:37 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:43.679 21:03:37 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:43.679 21:03:37 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:43.679 21:03:37 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:43.679 21:03:37 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:43.679 21:03:37 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:43.679 21:03:37 -- 
setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:43.679 21:03:37 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:43.679 21:03:37 -- setup/common.sh@18 -- # local node=0 00:02:43.679 21:03:37 -- setup/common.sh@19 -- # local var val 00:02:43.679 21:03:37 -- setup/common.sh@20 -- # local mem_f mem 00:02:43.679 21:03:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:43.679 21:03:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:43.679 21:03:37 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:43.680 21:03:37 -- setup/common.sh@28 -- # mapfile -t mem 00:02:43.680 21:03:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:43.680 21:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.680 21:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.680 21:03:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65755980 kB' 'MemFree: 51483252 kB' 'MemUsed: 14272728 kB' 'SwapCached: 0 kB' 'Active: 6504780 kB' 'Inactive: 3450444 kB' 'Active(anon): 6096604 kB' 'Inactive(anon): 0 kB' 'Active(file): 408176 kB' 'Inactive(file): 3450444 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9532824 kB' 'Mapped: 113512 kB' 'AnonPages: 431504 kB' 'Shmem: 5674204 kB' 'KernelStack: 12424 kB' 'PageTables: 6748 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 263984 kB' 'Slab: 677140 kB' 'SReclaimable: 263984 kB' 'SUnreclaim: 413156 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:43.680 21:03:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.680 21:03:37 -- setup/common.sh@32 -- # continue 00:02:43.680 21:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.680 21:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.680 21:03:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.680 21:03:37 -- setup/common.sh@32 -- # continue 00:02:43.680 21:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.680 21:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.680 21:03:37 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.680 21:03:37 -- setup/common.sh@32 -- # continue 00:02:43.680 21:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.680 21:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.680 21:03:37 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.680 21:03:37 -- setup/common.sh@32 -- # continue 00:02:43.680 21:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.680 21:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.680 21:03:37 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.680 21:03:37 -- setup/common.sh@32 -- # continue 00:02:43.680 21:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.680 21:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.680 21:03:37 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.680 21:03:37 -- setup/common.sh@32 -- # continue 00:02:43.680 21:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.680 21:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.680 21:03:37 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.680 21:03:37 -- setup/common.sh@32 -- # 
continue 00:02:43.680 21:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.680 21:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.680 21:03:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.680 21:03:37 -- setup/common.sh@32 -- # continue 00:02:43.680 21:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.680 21:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.680 21:03:37 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.680 21:03:37 -- setup/common.sh@32 -- # continue 00:02:43.680 21:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.680 21:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.680 21:03:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.680 21:03:37 -- setup/common.sh@32 -- # continue 00:02:43.680 21:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.680 21:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.680 21:03:37 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.680 21:03:37 -- setup/common.sh@32 -- # continue 00:02:43.680 21:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.680 21:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.680 21:03:37 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.680 21:03:37 -- setup/common.sh@32 -- # continue 00:02:43.680 21:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.680 21:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.680 21:03:37 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.680 21:03:37 -- setup/common.sh@32 -- # continue 00:02:43.680 21:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.680 21:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.680 21:03:37 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.680 21:03:37 -- setup/common.sh@32 -- # continue 00:02:43.680 21:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.680 21:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.680 21:03:37 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.680 21:03:37 -- setup/common.sh@32 -- # continue 00:02:43.680 21:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.680 21:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.680 21:03:37 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.680 21:03:37 -- setup/common.sh@32 -- # continue 00:02:43.680 21:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.680 21:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.680 21:03:37 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.680 21:03:37 -- setup/common.sh@32 -- # continue 00:02:43.680 21:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.680 21:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.680 21:03:37 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.680 21:03:37 -- setup/common.sh@32 -- # continue 00:02:43.680 21:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.680 21:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.680 21:03:37 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.680 21:03:37 -- setup/common.sh@32 -- # continue 00:02:43.680 21:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.680 21:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.680 21:03:37 -- setup/common.sh@32 -- # [[ 
PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.680 21:03:37 -- setup/common.sh@32 -- # continue 00:02:43.680 21:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.680 21:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.680 21:03:37 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.680 21:03:37 -- setup/common.sh@32 -- # continue 00:02:43.680 21:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.680 21:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.680 21:03:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.680 21:03:37 -- setup/common.sh@32 -- # continue 00:02:43.680 21:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.680 21:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.680 21:03:37 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.680 21:03:37 -- setup/common.sh@32 -- # continue 00:02:43.680 21:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.680 21:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.680 21:03:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.680 21:03:37 -- setup/common.sh@32 -- # continue 00:02:43.680 21:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.680 21:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.680 21:03:37 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.680 21:03:37 -- setup/common.sh@32 -- # continue 00:02:43.680 21:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.680 21:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.680 21:03:37 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.680 21:03:37 -- setup/common.sh@32 -- # continue 00:02:43.680 21:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.680 21:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.680 21:03:37 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.680 21:03:37 -- setup/common.sh@32 -- # continue 00:02:43.680 21:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.680 21:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.680 21:03:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.680 21:03:37 -- setup/common.sh@32 -- # continue 00:02:43.680 21:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.680 21:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.680 21:03:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.680 21:03:37 -- setup/common.sh@32 -- # continue 00:02:43.680 21:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.680 21:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.680 21:03:37 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.680 21:03:37 -- setup/common.sh@32 -- # continue 00:02:43.680 21:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.680 21:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.680 21:03:37 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.680 21:03:37 -- setup/common.sh@32 -- # continue 00:02:43.680 21:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.680 21:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.680 21:03:37 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.680 21:03:37 -- setup/common.sh@32 -- # continue 00:02:43.680 21:03:37 -- setup/common.sh@31 -- # IFS=': ' 
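
Around this point the default_setup test is finishing its per-node pass: it has just read HugePages_Surp from /sys/devices/system/node/node0/meminfo and is about to print the "node0=1024 expecting 1024" assertion visible below. Roughly, and with illustrative array names (the real hugepages.sh bookkeeping may differ in detail):

    # nodes_sys[] holds the per-node counts read from sysfs, nodes_test[]
    # the counts the test expects; initial values match this run's trace.
    declare -A nodes_sys=( [0]=1024 [1]=0 ) nodes_test=( [0]=1024 [1]=0 )
    for node in "${!nodes_test[@]}"; do
        surp=$(get_meminfo_sketch HugePages_Surp "$node")   # 0 on node 0 here
        (( nodes_test[node] += surp ))
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
        [[ ${nodes_sys[node]} == "${nodes_test[node]}" ]] || exit 1
    done
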
00:02:43.680 21:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.680 21:03:37 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.680 21:03:37 -- setup/common.sh@32 -- # continue 00:02:43.680 21:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.680 21:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.680 21:03:37 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.680 21:03:37 -- setup/common.sh@32 -- # continue 00:02:43.680 21:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.680 21:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.680 21:03:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.680 21:03:37 -- setup/common.sh@32 -- # continue 00:02:43.680 21:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.680 21:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.680 21:03:37 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.680 21:03:37 -- setup/common.sh@32 -- # continue 00:02:43.680 21:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.680 21:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.680 21:03:37 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.680 21:03:37 -- setup/common.sh@33 -- # echo 0 00:02:43.680 21:03:37 -- setup/common.sh@33 -- # return 0 00:02:43.680 21:03:37 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:43.680 21:03:37 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:43.680 21:03:37 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:43.680 21:03:37 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:43.680 21:03:37 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:02:43.680 node0=1024 expecting 1024 00:02:43.680 21:03:37 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:02:43.680 00:02:43.680 real 0m4.319s 00:02:43.680 user 0m1.061s 00:02:43.680 sys 0m1.997s 00:02:43.680 21:03:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:43.680 21:03:37 -- common/autotest_common.sh@10 -- # set +x 00:02:43.680 ************************************ 00:02:43.680 END TEST default_setup 00:02:43.680 ************************************ 00:02:43.680 21:03:37 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:02:43.680 21:03:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:43.680 21:03:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:43.680 21:03:37 -- common/autotest_common.sh@10 -- # set +x 00:02:43.680 ************************************ 00:02:43.680 START TEST per_node_1G_alloc 00:02:43.680 ************************************ 00:02:43.680 21:03:37 -- common/autotest_common.sh@1111 -- # per_node_1G_alloc 00:02:43.680 21:03:37 -- setup/hugepages.sh@143 -- # local IFS=, 00:02:43.680 21:03:37 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:02:43.680 21:03:37 -- setup/hugepages.sh@49 -- # local size=1048576 00:02:43.680 21:03:37 -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:02:43.680 21:03:37 -- setup/hugepages.sh@51 -- # shift 00:02:43.680 21:03:37 -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:02:43.680 21:03:37 -- setup/hugepages.sh@52 -- # local node_ids 00:02:43.680 21:03:37 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:43.680 21:03:37 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:02:43.680 21:03:37 -- setup/hugepages.sh@58 -- # 
get_test_nr_hugepages_per_node 0 1 00:02:43.680 21:03:37 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:02:43.680 21:03:37 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:43.680 21:03:37 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:02:43.680 21:03:37 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:43.680 21:03:37 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:43.680 21:03:37 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:43.680 21:03:37 -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:02:43.680 21:03:37 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:43.681 21:03:37 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:02:43.681 21:03:37 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:43.681 21:03:37 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:02:43.681 21:03:37 -- setup/hugepages.sh@73 -- # return 0 00:02:43.681 21:03:37 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:02:43.681 21:03:37 -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:02:43.681 21:03:37 -- setup/hugepages.sh@146 -- # setup output 00:02:43.681 21:03:37 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:43.681 21:03:37 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:02:46.227 0000:74:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:46.227 0000:c9:00.0 (144d a80a): Already using the vfio-pci driver 00:02:46.227 0000:f1:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:46.227 0000:79:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:46.227 0000:6f:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:46.227 0000:6f:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:46.227 0000:f6:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:46.227 0000:f6:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:46.227 0000:74:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:46.227 0000:6a:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:46.227 0000:79:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:46.227 0000:ec:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:46.227 0000:6a:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:46.227 0000:ec:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:46.227 0000:e7:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:46.227 0000:e7:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:46.227 0000:f1:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:46.227 0000:03:00.0 (1344 51c3): Already using the vfio-pci driver 00:02:46.491 21:03:40 -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:02:46.491 21:03:40 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:02:46.491 21:03:40 -- setup/hugepages.sh@89 -- # local node 00:02:46.491 21:03:40 -- setup/hugepages.sh@90 -- # local sorted_t 00:02:46.491 21:03:40 -- setup/hugepages.sh@91 -- # local sorted_s 00:02:46.491 21:03:40 -- setup/hugepages.sh@92 -- # local surp 00:02:46.491 21:03:40 -- setup/hugepages.sh@93 -- # local resv 00:02:46.491 21:03:40 -- setup/hugepages.sh@94 -- # local anon 00:02:46.491 21:03:40 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:46.491 21:03:40 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:46.491 21:03:40 -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:46.491 21:03:40 -- setup/common.sh@18 -- # local node= 00:02:46.491 21:03:40 -- setup/common.sh@19 -- # local var val 00:02:46.491 21:03:40 -- 
setup/common.sh@20 -- # local mem_f mem 00:02:46.491 21:03:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:46.491 21:03:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:46.491 21:03:40 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:46.491 21:03:40 -- setup/common.sh@28 -- # mapfile -t mem 00:02:46.491 21:03:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:46.491 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.491 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.491 21:03:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437988 kB' 'MemFree: 106581948 kB' 'MemAvailable: 111294976 kB' 'Buffers: 2800 kB' 'Cached: 13375924 kB' 'SwapCached: 0 kB' 'Active: 9435848 kB' 'Inactive: 4601764 kB' 'Active(anon): 8864488 kB' 'Inactive(anon): 0 kB' 'Active(file): 571360 kB' 'Inactive(file): 4601764 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 667776 kB' 'Mapped: 192088 kB' 'Shmem: 8205600 kB' 'KReclaimable: 581248 kB' 'Slab: 1295288 kB' 'SReclaimable: 581248 kB' 'SUnreclaim: 714040 kB' 'KernelStack: 25184 kB' 'PageTables: 11228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559020 kB' 'Committed_AS: 10558700 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230856 kB' 'VmallocChunk: 0 kB' 'Percpu: 187392 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3698752 kB' 'DirectMap2M: 27535360 kB' 'DirectMap1G: 104857600 kB' 00:02:46.491 21:03:40 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.491 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.491 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.491 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.491 21:03:40 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.491 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.491 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.491 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.491 21:03:40 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.491 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.491 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.491 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.491 21:03:40 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.491 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.491 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.491 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.491 21:03:40 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.491 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.491 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.491 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.491 21:03:40 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.491 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.491 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.491 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 
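
The per_node_1G_alloc prologue above (get_test_nr_hugepages 1048576 0 1 through NRHUGE=512 HUGENODE=0,1) converts a 1 GiB request into 2 MiB pages and assigns that count to every node listed after the size. A sketch of the assumed arithmetic, which reproduces the nr_hugepages=512 per node and nr_hugepages=1024 total seen in the trace:

    declare -a nodes_test=()
    default_hugepages=2048                          # kB, Hugepagesize from meminfo
    size=1048576                                    # kB, i.e. 1 GiB requested
    nr_hugepages=$(( size / default_hugepages ))    # 512 pages of 2 MiB
    # ...and each node named on the command line gets that same count:
    for node in 0 1; do
        nodes_test[node]=$nr_hugepages
    done
    echo "NRHUGE=$nr_hugepages HUGENODE=0,1"        # matches hugepages.sh@146 above
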
00:02:46.491 21:03:40 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.491 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.491 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.491 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.491 21:03:40 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.491 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.491 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.491 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.491 21:03:40 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.491 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.491 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.491 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.491 21:03:40 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.491 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.491 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.491 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.491 21:03:40 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.491 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.491 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.491 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.491 21:03:40 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.491 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.492 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.492 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.492 21:03:40 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.492 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.492 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.492 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.492 21:03:40 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.492 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.492 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.492 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.492 21:03:40 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.492 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.492 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.492 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.492 21:03:40 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.492 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.492 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.492 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.492 21:03:40 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.492 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.492 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.492 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.492 21:03:40 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.492 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.492 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.492 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.492 21:03:40 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.492 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.492 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 
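
This scan is verify_nr_hugepages computing its anon term: the transparent-hugepage mode was checked a few lines up ("always [madvise] never"), and since it is not pinned to [never], AnonHugePages is read and folded into the expected total. A hedged sketch using get_meminfo_sketch from the earlier snippet; the exact kB-to-pages conversion is an assumption:

    default_hugepages=2048
    anon=0
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)  # "always [madvise] never"
    if [[ $thp != *'[never]'* ]]; then
        anon_kb=$(get_meminfo_sketch AnonHugePages)      # 0 kB in this run
        anon=$(( anon_kb / default_hugepages ))          # assumed 2 MiB-page conversion
    fi
    echo "anon_hugepages=$anon"                          # prints 0, as in the log
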
00:02:46.492 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.492 21:03:40 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.492 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.492 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.492 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.492 21:03:40 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.492 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.492 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.492 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.492 21:03:40 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.492 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.492 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.492 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.492 21:03:40 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.492 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.492 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.492 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.492 21:03:40 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.492 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.492 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.492 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.492 21:03:40 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.492 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.492 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.492 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.492 21:03:40 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.492 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.492 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.492 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.492 21:03:40 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.492 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.492 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.492 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.492 21:03:40 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.492 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.492 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.492 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.492 21:03:40 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.492 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.492 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.492 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.492 21:03:40 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.492 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.492 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.492 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.492 21:03:40 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.492 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.492 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.492 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.492 21:03:40 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.492 21:03:40 -- setup/common.sh@32 -- # 
continue 00:02:46.492 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.492 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.492 21:03:40 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.492 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.492 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.492 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.492 21:03:40 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.492 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.492 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.492 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.492 21:03:40 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.492 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.492 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.492 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.492 21:03:40 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.492 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.492 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.492 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.492 21:03:40 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.492 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.492 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.492 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.492 21:03:40 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.492 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.492 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.492 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.492 21:03:40 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.492 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.492 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.492 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.492 21:03:40 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.492 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.492 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.492 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.492 21:03:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.492 21:03:40 -- setup/common.sh@33 -- # echo 0 00:02:46.492 21:03:40 -- setup/common.sh@33 -- # return 0 00:02:46.492 21:03:40 -- setup/hugepages.sh@97 -- # anon=0 00:02:46.492 21:03:40 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:46.492 21:03:40 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:46.492 21:03:40 -- setup/common.sh@18 -- # local node= 00:02:46.492 21:03:40 -- setup/common.sh@19 -- # local var val 00:02:46.492 21:03:40 -- setup/common.sh@20 -- # local mem_f mem 00:02:46.492 21:03:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:46.492 21:03:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:46.492 21:03:40 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:46.492 21:03:40 -- setup/common.sh@28 -- # mapfile -t mem 00:02:46.492 21:03:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:46.492 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.492 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.492 21:03:40 -- setup/common.sh@16 -- # printf '%s\n' 
'MemTotal: 126437988 kB' 'MemFree: 106583160 kB' 'MemAvailable: 111296188 kB' 'Buffers: 2800 kB' 'Cached: 13375924 kB' 'SwapCached: 0 kB' 'Active: 9436276 kB' 'Inactive: 4601764 kB' 'Active(anon): 8864916 kB' 'Inactive(anon): 0 kB' 'Active(file): 571360 kB' 'Inactive(file): 4601764 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 668180 kB' 'Mapped: 192096 kB' 'Shmem: 8205600 kB' 'KReclaimable: 581248 kB' 'Slab: 1295288 kB' 'SReclaimable: 581248 kB' 'SUnreclaim: 714040 kB' 'KernelStack: 25152 kB' 'PageTables: 11144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559020 kB' 'Committed_AS: 10558712 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230760 kB' 'VmallocChunk: 0 kB' 'Percpu: 187392 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3698752 kB' 'DirectMap2M: 27535360 kB' 'DirectMap1G: 104857600 kB' 00:02:46.492 21:03:40 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.492 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.492 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.492 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.492 21:03:40 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.492 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.492 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.492 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.492 21:03:40 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.492 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.492 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.492 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.492 21:03:40 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.492 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.492 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.492 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.492 21:03:40 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.492 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.492 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.493 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.493 21:03:40 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.493 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.493 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.493 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.493 21:03:40 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.493 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.493 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.493 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.493 21:03:40 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.493 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.493 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.493 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.493 21:03:40 -- setup/common.sh@32 
-- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.493 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.493 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.493 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.493 21:03:40 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.493 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.493 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.493 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.493 21:03:40 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.493 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.493 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.493 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.493 21:03:40 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.493 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.493 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.493 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.493 21:03:40 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.493 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.493 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.493 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.493 21:03:40 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.493 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.493 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.493 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.493 21:03:40 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.493 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.493 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.493 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.493 21:03:40 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.493 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.493 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.493 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.493 21:03:40 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.493 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.493 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.493 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.493 21:03:40 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.493 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.493 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.493 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.493 21:03:40 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.493 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.493 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.493 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.493 21:03:40 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.493 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.493 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.493 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.493 21:03:40 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.493 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.493 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.493 
21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.493 21:03:40 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.493 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.493 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.493 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.493 21:03:40 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.493 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.493 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.493 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.493 21:03:40 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.493 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.493 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.493 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.493 21:03:40 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.493 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.493 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.493 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.493 21:03:40 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.493 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.493 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.493 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.493 21:03:40 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.493 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.493 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.493 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.493 21:03:40 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.493 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.493 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.493 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.493 21:03:40 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.493 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.493 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.493 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.493 21:03:40 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.493 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.493 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.493 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.493 21:03:40 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.493 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.493 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.493 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.493 21:03:40 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.493 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.493 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.493 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.493 21:03:40 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.493 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.493 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.493 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.493 21:03:40 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.493 21:03:40 -- 
setup/common.sh@32 -- # continue 00:02:46.493 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.493 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.493 21:03:40 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.493 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.493 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.493 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.493 21:03:40 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.493 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.493 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.493 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.493 21:03:40 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.493 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.493 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.493 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.493 21:03:40 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.493 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.493 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.493 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.493 21:03:40 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.493 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.493 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.493 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.493 21:03:40 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.493 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.493 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.493 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.493 21:03:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.493 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.493 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.493 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.493 21:03:40 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.493 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.493 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.493 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.493 21:03:40 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.493 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.493 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.493 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.493 21:03:40 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.493 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.493 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.493 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.493 21:03:40 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.493 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.493 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.493 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.493 21:03:40 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.493 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.493 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.493 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 
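
The remaining lookups (HugePages_Surp just below, then HugePages_Rsvd) feed the single accounting identity that verify_nr_hugepages tests, the same (( 1024 == nr_hugepages + surp + resv )) check traced in default_setup earlier. With this run's values, using the earlier sketch helper:

    total=$(get_meminfo_sketch HugePages_Total)   # 1024
    surp=$(get_meminfo_sketch HugePages_Surp)     # 0
    resv=$(get_meminfo_sketch HugePages_Rsvd)     # 0
    nr_hugepages=1024
    (( total == nr_hugepages + surp + resv )) &&
        echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp"
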
00:02:46.494 21:03:40 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.494 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.494 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.494 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.494 21:03:40 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.494 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.494 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.494 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.494 21:03:40 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.494 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.494 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.494 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.494 21:03:40 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.494 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.494 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.494 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.494 21:03:40 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.494 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.494 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.494 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.494 21:03:40 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.494 21:03:40 -- setup/common.sh@33 -- # echo 0 00:02:46.494 21:03:40 -- setup/common.sh@33 -- # return 0 00:02:46.494 21:03:40 -- setup/hugepages.sh@99 -- # surp=0 00:02:46.494 21:03:40 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:46.494 21:03:40 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:46.494 21:03:40 -- setup/common.sh@18 -- # local node= 00:02:46.494 21:03:40 -- setup/common.sh@19 -- # local var val 00:02:46.494 21:03:40 -- setup/common.sh@20 -- # local mem_f mem 00:02:46.494 21:03:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:46.494 21:03:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:46.494 21:03:40 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:46.494 21:03:40 -- setup/common.sh@28 -- # mapfile -t mem 00:02:46.494 21:03:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:46.494 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.494 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.494 21:03:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437988 kB' 'MemFree: 106586552 kB' 'MemAvailable: 111299580 kB' 'Buffers: 2800 kB' 'Cached: 13375936 kB' 'SwapCached: 0 kB' 'Active: 9435196 kB' 'Inactive: 4601764 kB' 'Active(anon): 8863836 kB' 'Inactive(anon): 0 kB' 'Active(file): 571360 kB' 'Inactive(file): 4601764 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 667520 kB' 'Mapped: 192088 kB' 'Shmem: 8205612 kB' 'KReclaimable: 581248 kB' 'Slab: 1295256 kB' 'SReclaimable: 581248 kB' 'SUnreclaim: 714008 kB' 'KernelStack: 25136 kB' 'PageTables: 11056 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559020 kB' 'Committed_AS: 10558724 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230808 kB' 'VmallocChunk: 0 kB' 'Percpu: 187392 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3698752 kB' 'DirectMap2M: 27535360 kB' 'DirectMap1G: 104857600 kB' 00:02:46.494 21:03:40 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.494 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.494 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.494 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.494 21:03:40 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.494 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.494 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.494 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.494 21:03:40 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.494 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.494 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.494 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.494 21:03:40 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.494 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.494 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.494 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.494 21:03:40 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.494 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.494 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.494 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.494 21:03:40 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.494 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.494 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.494 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.494 21:03:40 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.494 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.494 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.494 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.494 21:03:40 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.494 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.494 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.494 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.494 21:03:40 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.494 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.494 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.494 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.494 21:03:40 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.494 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.494 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.494 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.494 21:03:40 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.494 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.494 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.494 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.494 21:03:40 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.494 21:03:40 -- 
setup/common.sh@32 -- # continue 00:02:46.494 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.494 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.494 21:03:40 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.494 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.494 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.494 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.494 21:03:40 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.494 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.494 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.494 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.494 21:03:40 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.494 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.494 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.494 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.494 21:03:40 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.494 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.494 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.494 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.494 21:03:40 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.494 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.494 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.494 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.494 21:03:40 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.494 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.494 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.494 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.494 21:03:40 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.494 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.494 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.494 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.494 21:03:40 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.494 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.494 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.494 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.494 21:03:40 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.494 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.494 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.494 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.494 21:03:40 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.494 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.494 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.494 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.494 21:03:40 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.494 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.494 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.494 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.494 21:03:40 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.494 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.494 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.494 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.494 21:03:40 -- setup/common.sh@32 -- # 
[[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.494 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.494 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.494 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.494 21:03:40 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.494 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.494 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.494 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.494 21:03:40 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.494 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.494 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.494 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.494 21:03:40 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.494 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.495 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.495 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.495 21:03:40 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.495 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.495 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.495 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.495 21:03:40 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.495 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.495 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.495 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.495 21:03:40 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.495 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.495 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.495 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.495 21:03:40 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.495 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.495 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.495 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.495 21:03:40 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.495 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.495 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.495 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.495 21:03:40 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.495 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.495 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.495 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.495 21:03:40 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.495 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.495 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.495 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.495 21:03:40 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.495 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.495 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.495 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.495 21:03:40 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.495 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.495 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 
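Each of these scans ends the same way: once the requested key matches, the function echoes the value (0 for HugePages_Rsvd below) and returns, and hugepages.sh folds the result into its accounting. A hedged sketch of that consistency check, reusing the get_meminfo sketch above (names follow the trace; the literal 1024 is this run's nr_hugepages):

  nr_hugepages=1024
  surp=$(get_meminfo HugePages_Surp)    # 0 in this run
  resv=$(get_meminfo HugePages_Rsvd)    # 0 in this run
  total=$(get_meminfo HugePages_Total)  # 1024 in this run
  # verify_nr_hugepages expects the kernel's total to equal the requested
  # pages plus any surplus and reserved pages.
  if (( total == nr_hugepages + surp + resv )); then
      echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp"
  fi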
00:02:46.495 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.495 21:03:40 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.495 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.495 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.495 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.495 21:03:40 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.495 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.495 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.495 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.495 21:03:40 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.495 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.495 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.495 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.495 21:03:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.495 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.495 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.495 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.495 21:03:40 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.495 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.495 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.495 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.495 21:03:40 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.495 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.495 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.495 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.495 21:03:40 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.495 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.495 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.495 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.495 21:03:40 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.495 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.495 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.495 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.495 21:03:40 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.495 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.495 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.495 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.495 21:03:40 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.495 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.495 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.495 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.495 21:03:40 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.495 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.495 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.495 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.495 21:03:40 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.495 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.495 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.495 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.495 21:03:40 -- setup/common.sh@32 -- # [[ HugePages_Free == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.495 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.495 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.495 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.495 21:03:40 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.495 21:03:40 -- setup/common.sh@33 -- # echo 0 00:02:46.495 21:03:40 -- setup/common.sh@33 -- # return 0 00:02:46.495 21:03:40 -- setup/hugepages.sh@100 -- # resv=0 00:02:46.495 21:03:40 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:46.495 nr_hugepages=1024 00:02:46.495 21:03:40 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:46.495 resv_hugepages=0 00:02:46.495 21:03:40 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:46.495 surplus_hugepages=0 00:02:46.495 21:03:40 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:46.495 anon_hugepages=0 00:02:46.495 21:03:40 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:46.495 21:03:40 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:46.495 21:03:40 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:46.495 21:03:40 -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:46.495 21:03:40 -- setup/common.sh@18 -- # local node= 00:02:46.495 21:03:40 -- setup/common.sh@19 -- # local var val 00:02:46.495 21:03:40 -- setup/common.sh@20 -- # local mem_f mem 00:02:46.495 21:03:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:46.495 21:03:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:46.495 21:03:40 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:46.495 21:03:40 -- setup/common.sh@28 -- # mapfile -t mem 00:02:46.495 21:03:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:46.495 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.495 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.495 21:03:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437988 kB' 'MemFree: 106586472 kB' 'MemAvailable: 111299500 kB' 'Buffers: 2800 kB' 'Cached: 13375952 kB' 'SwapCached: 0 kB' 'Active: 9435512 kB' 'Inactive: 4601764 kB' 'Active(anon): 8864152 kB' 'Inactive(anon): 0 kB' 'Active(file): 571360 kB' 'Inactive(file): 4601764 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 667824 kB' 'Mapped: 192148 kB' 'Shmem: 8205628 kB' 'KReclaimable: 581248 kB' 'Slab: 1295320 kB' 'SReclaimable: 581248 kB' 'SUnreclaim: 714072 kB' 'KernelStack: 25328 kB' 'PageTables: 11680 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559020 kB' 'Committed_AS: 10560264 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230920 kB' 'VmallocChunk: 0 kB' 'Percpu: 187392 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3698752 kB' 'DirectMap2M: 27535360 kB' 'DirectMap1G: 104857600 kB' 00:02:46.495 21:03:40 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.495 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.495 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.495 21:03:40 -- setup/common.sh@31 -- # read -r var 
val _ 00:02:46.495 21:03:40 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.495 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.495 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.495 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.495 21:03:40 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.495 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.495 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.495 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.495 21:03:40 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.495 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.495 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.495 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.495 21:03:40 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.495 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.495 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.495 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.495 21:03:40 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.495 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.495 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.495 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.495 21:03:40 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.495 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.495 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.495 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.495 21:03:40 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.495 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.495 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.495 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.495 21:03:40 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.496 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.496 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.496 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.496 21:03:40 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.496 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.496 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.496 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.496 21:03:40 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.496 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.496 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.496 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.496 21:03:40 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.496 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.496 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.496 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.496 21:03:40 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.496 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.496 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.496 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.496 21:03:40 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.496 21:03:40 -- setup/common.sh@32 -- # 
continue 00:02:46.496 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.496 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.496 21:03:40 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.496 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.496 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.496 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.496 21:03:40 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.496 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.496 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.496 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.496 21:03:40 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.496 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.496 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.496 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.496 21:03:40 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.496 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.496 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.496 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.496 21:03:40 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.496 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.496 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.496 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.496 21:03:40 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.496 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.496 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.496 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.496 21:03:40 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.496 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.496 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.496 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.496 21:03:40 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.496 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.496 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.496 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.496 21:03:40 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.496 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.496 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.496 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.496 21:03:40 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.496 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.496 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.496 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.496 21:03:40 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.496 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.496 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.496 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.496 21:03:40 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.496 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.496 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.496 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.496 21:03:40 -- setup/common.sh@32 -- # [[ 
SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.496 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.496 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.496 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.496 21:03:40 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.496 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.496 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.496 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.496 21:03:40 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.496 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.496 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.496 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.496 21:03:40 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.496 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.496 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.496 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.496 21:03:40 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.496 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.496 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.496 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.496 21:03:40 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.496 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.496 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.496 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.496 21:03:40 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.496 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.496 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.496 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.496 21:03:40 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.496 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.496 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.496 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.496 21:03:40 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.496 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.496 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.496 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.496 21:03:40 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.496 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.496 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.496 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.496 21:03:40 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.496 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.496 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.496 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.496 21:03:40 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.496 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.496 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.496 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.496 21:03:40 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.496 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.496 21:03:40 -- setup/common.sh@31 
-- # IFS=': ' 00:02:46.496 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.496 21:03:40 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.496 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.496 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.497 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.497 21:03:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.497 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.497 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.497 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.497 21:03:40 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.497 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.497 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.497 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.497 21:03:40 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.497 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.497 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.497 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.497 21:03:40 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.497 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.497 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.497 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.497 21:03:40 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.497 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.497 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.497 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.497 21:03:40 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.497 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.497 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.497 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.497 21:03:40 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.497 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.497 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.497 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.497 21:03:40 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.497 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.497 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.497 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.497 21:03:40 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.497 21:03:40 -- setup/common.sh@33 -- # echo 1024 00:02:46.497 21:03:40 -- setup/common.sh@33 -- # return 0 00:02:46.497 21:03:40 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:46.497 21:03:40 -- setup/hugepages.sh@112 -- # get_nodes 00:02:46.497 21:03:40 -- setup/hugepages.sh@27 -- # local node 00:02:46.497 21:03:40 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:46.497 21:03:40 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:46.497 21:03:40 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:46.497 21:03:40 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:46.497 21:03:40 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:46.497 21:03:40 -- 
setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:46.497 21:03:40 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:46.497 21:03:40 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:46.497 21:03:40 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:46.497 21:03:40 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:46.497 21:03:40 -- setup/common.sh@18 -- # local node=0 00:02:46.497 21:03:40 -- setup/common.sh@19 -- # local var val 00:02:46.497 21:03:40 -- setup/common.sh@20 -- # local mem_f mem 00:02:46.497 21:03:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:46.497 21:03:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:46.497 21:03:40 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:46.497 21:03:40 -- setup/common.sh@28 -- # mapfile -t mem 00:02:46.497 21:03:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:46.497 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.497 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.497 21:03:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65755980 kB' 'MemFree: 52532288 kB' 'MemUsed: 13223692 kB' 'SwapCached: 0 kB' 'Active: 6506396 kB' 'Inactive: 3450444 kB' 'Active(anon): 6098220 kB' 'Inactive(anon): 0 kB' 'Active(file): 408176 kB' 'Inactive(file): 3450444 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9532932 kB' 'Mapped: 113528 kB' 'AnonPages: 433056 kB' 'Shmem: 5674312 kB' 'KernelStack: 12440 kB' 'PageTables: 6792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 263984 kB' 'Slab: 677932 kB' 'SReclaimable: 263984 kB' 'SUnreclaim: 413948 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:46.497 21:03:40 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.497 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.497 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.497 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.497 21:03:40 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.497 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.497 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.497 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.497 21:03:40 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.497 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.497 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.497 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.497 21:03:40 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.497 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.497 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.497 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.497 21:03:40 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.497 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.497 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.497 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.497 21:03:40 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.497 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.497 21:03:40 -- 
setup/common.sh@31 -- # IFS=': ' 00:02:46.497 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.497 21:03:40 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.497 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.497 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.497 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.497 21:03:40 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.497 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.497 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.497 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.497 21:03:40 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.497 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.497 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.497 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.497 21:03:40 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.497 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.497 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.497 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.497 21:03:40 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.497 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.497 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.497 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.497 21:03:40 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.497 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.497 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.497 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.497 21:03:40 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.497 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.497 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.497 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.497 21:03:40 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.497 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.497 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.497 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.497 21:03:40 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.497 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.497 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.497 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.497 21:03:40 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.497 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.497 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.497 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.497 21:03:40 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.497 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.497 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.497 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.497 21:03:40 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.497 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.497 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.497 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.497 21:03:40 -- setup/common.sh@32 -- # [[ KernelStack == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.497 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.497 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.497 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.497 21:03:40 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.497 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.497 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.497 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.497 21:03:40 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.497 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.497 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.497 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.497 21:03:40 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.497 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.497 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.497 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.498 21:03:40 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.498 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.498 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.498 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.498 21:03:40 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.498 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.498 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.498 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.498 21:03:40 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.498 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.498 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.498 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.498 21:03:40 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.498 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.498 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.498 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.498 21:03:40 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.498 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.498 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.498 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.498 21:03:40 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.498 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.498 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.498 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.498 21:03:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.498 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.498 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.498 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.498 21:03:40 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.498 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.498 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.498 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.498 21:03:40 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.498 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.498 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.498 21:03:40 
-- setup/common.sh@31 -- # read -r var val _ 00:02:46.498 21:03:40 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.498 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.498 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.498 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.498 21:03:40 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.498 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.498 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.498 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.498 21:03:40 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.498 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.498 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.498 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.498 21:03:40 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.498 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.498 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.498 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.498 21:03:40 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.498 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.498 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.498 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.498 21:03:40 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.498 21:03:40 -- setup/common.sh@33 -- # echo 0 00:02:46.498 21:03:40 -- setup/common.sh@33 -- # return 0 00:02:46.498 21:03:40 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:46.498 21:03:40 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:46.498 21:03:40 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:46.498 21:03:40 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:46.498 21:03:40 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:46.498 21:03:40 -- setup/common.sh@18 -- # local node=1 00:02:46.498 21:03:40 -- setup/common.sh@19 -- # local var val 00:02:46.498 21:03:40 -- setup/common.sh@20 -- # local mem_f mem 00:02:46.498 21:03:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:46.498 21:03:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:46.498 21:03:40 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:46.498 21:03:40 -- setup/common.sh@28 -- # mapfile -t mem 00:02:46.498 21:03:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:46.498 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.498 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.498 21:03:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60682008 kB' 'MemFree: 54058148 kB' 'MemUsed: 6623860 kB' 'SwapCached: 0 kB' 'Active: 2929072 kB' 'Inactive: 1151320 kB' 'Active(anon): 2765888 kB' 'Inactive(anon): 0 kB' 'Active(file): 163184 kB' 'Inactive(file): 1151320 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3845832 kB' 'Mapped: 78620 kB' 'AnonPages: 234676 kB' 'Shmem: 2531328 kB' 'KernelStack: 12904 kB' 'PageTables: 4892 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 317264 kB' 'Slab: 617324 kB' 'SReclaimable: 317264 kB' 'SUnreclaim: 300060 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 
'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:46.498 21:03:40 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.498 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.498 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.498 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.498 21:03:40 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.498 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.498 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.498 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.498 21:03:40 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.498 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.498 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.498 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.498 21:03:40 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.498 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.498 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.498 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.498 21:03:40 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.498 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.498 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.498 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.498 21:03:40 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.498 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.498 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.498 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.498 21:03:40 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.498 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.498 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.498 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.498 21:03:40 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.498 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.498 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.498 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.498 21:03:40 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.498 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.498 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.498 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.498 21:03:40 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.498 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.498 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.498 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.498 21:03:40 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.498 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.498 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.498 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.498 21:03:40 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.498 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.498 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.498 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.498 21:03:40 -- setup/common.sh@32 -- # [[ Dirty 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.498 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.498 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.498 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.498 21:03:40 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.498 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.498 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.498 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.498 21:03:40 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.498 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.498 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.498 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.498 21:03:40 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.498 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.498 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.498 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.498 21:03:40 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.498 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.498 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.498 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.498 21:03:40 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.498 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.498 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.498 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.498 21:03:40 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.498 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.498 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.498 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.498 21:03:40 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.498 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.498 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.498 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.499 21:03:40 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.499 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.499 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.499 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.499 21:03:40 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.499 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.499 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.499 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.499 21:03:40 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.499 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.499 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.499 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.499 21:03:40 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.499 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.499 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.499 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.499 21:03:40 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.499 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.499 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.499 21:03:40 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:46.499 21:03:40 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.499 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.499 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.499 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.499 21:03:40 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.499 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.499 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.499 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.499 21:03:40 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.499 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.499 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.499 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.499 21:03:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.499 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.499 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.499 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.499 21:03:40 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.499 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.499 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.499 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.499 21:03:40 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.499 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.499 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.499 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.499 21:03:40 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.499 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.499 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.499 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.499 21:03:40 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.499 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.499 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.499 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.499 21:03:40 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.499 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.499 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.499 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.499 21:03:40 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.499 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.499 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.499 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.499 21:03:40 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.499 21:03:40 -- setup/common.sh@32 -- # continue 00:02:46.499 21:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.499 21:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.499 21:03:40 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.499 21:03:40 -- setup/common.sh@33 -- # echo 0 00:02:46.499 21:03:40 -- setup/common.sh@33 -- # return 0 00:02:46.499 21:03:40 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:46.499 21:03:40 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:46.499 
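With both per-node HugePages_Surp scans done (node0 and node1 above, both 0), the test prints what it found against what it expected. A small sketch of that final assertion, assuming this run's two-node topology and an even split of the 1024 pages; the traced script additionally accumulates surplus and reserved pages into nodes_test before comparing:

  no_nodes=2
  per_node=$(( 1024 / no_nodes ))            # 512 on this box
  for n in 0 1; do
      got=$(get_meminfo HugePages_Total "$n")  # per-node sysfs meminfo
      echo "node$n=$got expecting $per_node"
  done

That matches the "node0=512 expecting 512" / "node1=512 expecting 512" lines that follow, and the [[ 512 == \5\1\2 ]] test that closes per_node_1G_alloc.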
21:03:40 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:46.499 21:03:40 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:46.499 21:03:40 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:46.499 node0=512 expecting 512 00:02:46.499 21:03:40 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:46.499 21:03:40 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:46.499 21:03:40 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:46.499 21:03:40 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:02:46.499 node1=512 expecting 512 00:02:46.499 21:03:40 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:02:46.499 00:02:46.499 real 0m2.789s 00:02:46.499 user 0m0.901s 00:02:46.499 sys 0m1.641s 00:02:46.499 21:03:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:46.499 21:03:40 -- common/autotest_common.sh@10 -- # set +x 00:02:46.499 ************************************ 00:02:46.499 END TEST per_node_1G_alloc 00:02:46.499 ************************************ 00:02:46.499 21:03:40 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:02:46.499 21:03:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:46.499 21:03:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:46.499 21:03:40 -- common/autotest_common.sh@10 -- # set +x 00:02:46.760 ************************************ 00:02:46.760 START TEST even_2G_alloc 00:02:46.760 ************************************ 00:02:46.760 21:03:40 -- common/autotest_common.sh@1111 -- # even_2G_alloc 00:02:46.760 21:03:40 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:02:46.760 21:03:40 -- setup/hugepages.sh@49 -- # local size=2097152 00:02:46.760 21:03:40 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:46.760 21:03:40 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:46.760 21:03:40 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:46.760 21:03:40 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:46.760 21:03:40 -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:46.760 21:03:40 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:46.760 21:03:40 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:46.760 21:03:40 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:46.760 21:03:40 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:46.760 21:03:40 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:46.760 21:03:40 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:46.760 21:03:40 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:46.760 21:03:40 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:46.760 21:03:40 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:46.760 21:03:40 -- setup/hugepages.sh@83 -- # : 512 00:02:46.760 21:03:40 -- setup/hugepages.sh@84 -- # : 1 00:02:46.760 21:03:40 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:46.760 21:03:40 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:46.760 21:03:40 -- setup/hugepages.sh@83 -- # : 0 00:02:46.760 21:03:40 -- setup/hugepages.sh@84 -- # : 0 00:02:46.760 21:03:40 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:46.760 21:03:40 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:02:46.760 21:03:40 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:02:46.760 21:03:40 -- setup/hugepages.sh@153 -- # setup output 00:02:46.760 21:03:40 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:46.760 21:03:40 -- setup/common.sh@10 -- # 
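[editor's note] The trace above shows even_2G_alloc requesting 2 GiB worth of pages (size=2097152 kB) and get_test_nr_hugepages_per_node splitting nr_hugepages=1024 evenly across the two NUMA nodes. A minimal sketch of that arithmetic, assuming only the loop semantics implied by the xtrace (the variable names nr_hugepages, _no_nodes, _nr_hugepages, nodes_test are copied from the trace, but the code below is a paraphrase, not the SPDK script itself):

#!/usr/bin/env bash
# Sketch only: mirrors the even-split arithmetic visible in the
# get_test_nr_hugepages / get_test_nr_hugepages_per_node xtrace above.
size=2097152                                   # requested kB (2 GiB)
default_hugepages=2048                         # Hugepagesize in kB, per the snapshots below
nr_hugepages=$(( size / default_hugepages ))   # 1024 pages, as in the trace
_no_nodes=2
_nr_hugepages=$(( nr_hugepages / _no_nodes ))  # 512 pages per node
declare -a nodes_test
while (( _no_nodes > 0 )); do
  nodes_test[_no_nodes - 1]=$_nr_hugepages     # fill from the last node down
  (( _no_nodes-- ))
done
for node in "${!nodes_test[@]}"; do
  echo "node${node}=${nodes_test[node]} expecting ${nodes_test[node]}"
done

Run standalone, this prints the same "node0=512 expecting 512" / "node1=512 expecting 512" pairs the test logs above.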
00:02:46.760 21:03:40 -- setup/common.sh@10 -- /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh
00:02:49.373 0000:74:02.0 (8086 0cfe): Already using the vfio-pci driver
00:02:49.373 0000:c9:00.0 (144d a80a): Already using the vfio-pci driver
00:02:49.373 0000:f1:02.0 (8086 0cfe): Already using the vfio-pci driver
00:02:49.373 0000:79:02.0 (8086 0cfe): Already using the vfio-pci driver
00:02:49.373 0000:6f:01.0 (8086 0b25): Already using the vfio-pci driver
00:02:49.373 0000:6f:02.0 (8086 0cfe): Already using the vfio-pci driver
00:02:49.373 0000:f6:01.0 (8086 0b25): Already using the vfio-pci driver
00:02:49.373 0000:f6:02.0 (8086 0cfe): Already using the vfio-pci driver
00:02:49.373 0000:74:01.0 (8086 0b25): Already using the vfio-pci driver
00:02:49.373 0000:6a:02.0 (8086 0cfe): Already using the vfio-pci driver
00:02:49.373 0000:79:01.0 (8086 0b25): Already using the vfio-pci driver
00:02:49.373 0000:ec:01.0 (8086 0b25): Already using the vfio-pci driver
00:02:49.373 0000:6a:01.0 (8086 0b25): Already using the vfio-pci driver
00:02:49.373 0000:ec:02.0 (8086 0cfe): Already using the vfio-pci driver
00:02:49.373 0000:e7:01.0 (8086 0b25): Already using the vfio-pci driver
00:02:49.373 0000:e7:02.0 (8086 0cfe): Already using the vfio-pci driver
00:02:49.373 0000:f1:01.0 (8086 0b25): Already using the vfio-pci driver
00:02:49.373 0000:03:00.0 (1344 51c3): Already using the vfio-pci driver
00:02:49.638 21:03:43 -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:02:49.638 21:03:43 -- setup/hugepages.sh@89 -- # local node
00:02:49.638 21:03:43 -- setup/hugepages.sh@90 -- # local sorted_t
00:02:49.638 21:03:43 -- setup/hugepages.sh@91 -- # local sorted_s
00:02:49.638 21:03:43 -- setup/hugepages.sh@92 -- # local surp
00:02:49.638 21:03:43 -- setup/hugepages.sh@93 -- # local resv
00:02:49.638 21:03:43 -- setup/hugepages.sh@94 -- # local anon
00:02:49.638 21:03:43 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:02:49.638 21:03:43 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:02:49.638 21:03:43 -- setup/common.sh@17 -- # local get=AnonHugePages
00:02:49.638 21:03:43 -- setup/common.sh@18 -- # local node=
00:02:49.638 21:03:43 -- setup/common.sh@19 -- # local var val
00:02:49.638 21:03:43 -- setup/common.sh@20 -- # local mem_f mem
00:02:49.638 21:03:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:49.638 21:03:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:49.638 21:03:43 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:49.638 21:03:43 -- setup/common.sh@28 -- # mapfile -t mem
00:02:49.638 21:03:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:49.638 21:03:43 -- setup/common.sh@31 -- # IFS=': '
00:02:49.638 21:03:43 -- setup/common.sh@31 -- # read -r var val _
00:02:49.638 21:03:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437988 kB' 'MemFree: 106584540 kB' 'MemAvailable: 111297496 kB' 'Buffers: 2800 kB' 'Cached: 13376044 kB' 'SwapCached: 0 kB' 'Active: 9422996 kB' 'Inactive: 4601764 kB' 'Active(anon): 8851636 kB' 'Inactive(anon): 0 kB' 'Active(file): 571360 kB' 'Inactive(file): 4601764 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 655040 kB' 'Mapped: 190984 kB' 'Shmem: 8205720 kB' 'KReclaimable: 581176 kB' 'Slab: 1294248 kB' 'SReclaimable: 581176 kB' 'SUnreclaim: 713072 kB' 'KernelStack: 25248 kB' 'PageTables: 10932 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559020 kB' 'Committed_AS: 10498280 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230776 kB' 'VmallocChunk: 0 kB' 'Percpu: 187392 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3698752 kB' 'DirectMap2M: 27535360 kB' 'DirectMap1G: 104857600 kB'
00:02:49.638 21:03:43 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:49.638 21:03:43 -- setup/common.sh@32 -- # continue
[... repeated setup/common.sh@31-32 xtrace elided: the loop continues past every other /proc/meminfo key until AnonHugePages matches ...]
00:02:49.639 21:03:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:49.639 21:03:43 -- setup/common.sh@33 -- # echo 0
00:02:49.639 21:03:43 -- setup/common.sh@33 -- # return 0
00:02:49.639 21:03:43 -- setup/hugepages.sh@97 -- # anon=0
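[editor's note] Each get_meminfo call in this trace prints a full /proc/meminfo snapshot (the printf '%s\n' line) and then walks it key by key with IFS=': ' read -r var val _ until the requested field matches, echoing the value and returning 0. A minimal standalone sketch of that lookup, assuming only what the xtrace shows (the helper name get_meminfo_sketch is invented here; the real setup/common.sh also handles the per-node /sys/devices/system/node/<node>/meminfo path visible at @23, which this sketch omits):

# Sketch, not the SPDK source: key/value lookup over /proc/meminfo,
# modeled on the setup/common.sh@31-33 xtrace above.
get_meminfo_sketch() {
  local get=$1 var val _
  while IFS=': ' read -r var val _; do   # "MemTotal: 126437988 kB" -> var/val/unit
    if [[ $var == "$get" ]]; then
      echo "$val"
      return 0
    fi
  done < /proc/meminfo
  return 1                               # key not present
}
# e.g. "get_meminfo_sketch HugePages_Total" would print 1024 given the snapshot above.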
00:02:49.639 21:03:43 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:02:49.639 21:03:43 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:49.639 21:03:43 -- setup/common.sh@18 -- # local node=
00:02:49.639 21:03:43 -- setup/common.sh@19 -- # local var val
00:02:49.639 21:03:43 -- setup/common.sh@20 -- # local mem_f mem
00:02:49.639 21:03:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:49.639 21:03:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:49.639 21:03:43 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:49.639 21:03:43 -- setup/common.sh@28 -- # mapfile -t mem
00:02:49.639 21:03:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:49.639 21:03:43 -- setup/common.sh@31 -- # IFS=': '
00:02:49.639 21:03:43 -- setup/common.sh@31 -- # read -r var val _
00:02:49.639 21:03:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437988 kB' 'MemFree: 106584472 kB' 'MemAvailable: 111297428 kB' 'Buffers: 2800 kB' 'Cached: 13376044 kB' 'SwapCached: 0 kB' 'Active: 9424492 kB' 'Inactive: 4601764 kB' 'Active(anon): 8853132 kB' 'Inactive(anon): 0 kB' 'Active(file): 571360 kB' 'Inactive(file): 4601764 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 656592 kB' 'Mapped: 190984 kB' 'Shmem: 8205720 kB' 'KReclaimable: 581176 kB' 'Slab: 1294340 kB' 'SReclaimable: 581176 kB' 'SUnreclaim: 713164 kB' 'KernelStack: 25296 kB' 'PageTables: 11452 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559020 kB' 'Committed_AS: 10498296 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230808 kB' 'VmallocChunk: 0 kB' 'Percpu: 187392 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3698752 kB' 'DirectMap2M: 27535360 kB' 'DirectMap1G: 104857600 kB'
00:02:49.639 21:03:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:49.639 21:03:43 -- setup/common.sh@32 -- # continue
[... repeated setup/common.sh@31-32 xtrace elided: the loop continues past every other /proc/meminfo key until HugePages_Surp matches ...]
00:02:49.641 21:03:43 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:49.641 21:03:43 -- setup/common.sh@33 -- # echo 0
00:02:49.641 21:03:43 -- setup/common.sh@33 -- # return 0
00:02:49.641 21:03:43 -- setup/hugepages.sh@99 -- # surp=0
00:02:49.641 21:03:43 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:49.641 21:03:43 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:49.641 21:03:43 -- setup/common.sh@18 -- # local node=
00:02:49.641 21:03:43 -- setup/common.sh@19 -- # local var val
00:02:49.641 21:03:43 -- setup/common.sh@20 -- # local mem_f mem
00:02:49.641 21:03:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:49.641 21:03:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:49.641 21:03:43 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:49.641 21:03:43 -- setup/common.sh@28 -- # mapfile -t mem
00:02:49.641 21:03:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:49.641 21:03:43 -- setup/common.sh@31 -- # IFS=': '
00:02:49.641 21:03:43 -- setup/common.sh@31 -- # read -r var val _
00:02:49.641 21:03:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437988 kB' 'MemFree: 106585404 kB' 'MemAvailable: 111298360 kB' 'Buffers: 2800 kB' 'Cached: 13376056 kB' 'SwapCached: 0 kB' 'Active: 9423828 kB' 'Inactive: 4601764 kB' 'Active(anon): 8852468 kB' 'Inactive(anon): 0 kB' 'Active(file): 571360 kB' 'Inactive(file): 4601764 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 655916 kB' 'Mapped: 190896 kB' 'Shmem: 8205732 kB' 'KReclaimable: 581176 kB' 'Slab: 1294344 kB' 'SReclaimable: 581176 kB' 'SUnreclaim: 713168 kB' 'KernelStack: 25312 kB' 'PageTables: 11180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559020 kB' 'Committed_AS: 10498312 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230792 kB' 'VmallocChunk: 0 kB' 'Percpu: 187392 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3698752 kB' 'DirectMap2M: 27535360 kB' 'DirectMap1G: 104857600 kB'
00:02:49.641 21:03:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:49.641 21:03:43 -- setup/common.sh@32 -- # continue
[... repeated setup/common.sh@31-32 xtrace elided: the loop continues past every other /proc/meminfo key until HugePages_Rsvd matches ...]
00:02:49.642 21:03:43 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:49.642 21:03:43 -- setup/common.sh@33 -- # echo 0
00:02:49.642 21:03:43 -- setup/common.sh@33 -- # return 0
00:02:49.642 21:03:43 -- setup/hugepages.sh@100 -- # resv=0
00:02:49.642 21:03:43 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:02:49.642 nr_hugepages=1024
00:02:49.642 21:03:43 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:02:49.642 resv_hugepages=0
00:02:49.642 21:03:43 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:02:49.642 surplus_hugepages=0
00:02:49.642 21:03:43 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:49.642 anon_hugepages=0
00:02:49.642 21:03:43 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:49.642 21:03:43 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
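[editor's note] At this point verify_nr_hugepages checks the books: with surp=0 and resv=0 read back above, hugepages.sh@107 asserts that the 1024 pages reported by the kernel equal nr_hugepages + surp + resv before re-reading HugePages_Total below. A hedged sketch of the shape of that accounting check (how the real script sources its left-hand value differs; this is standalone illustration only):

# Sketch of the hugepages.sh@107 consistency check above; not the SPDK source.
nr_hugepages=1024; surp=0; resv=0   # values the trace read back via get_meminfo
total=$(awk '/^HugePages_Total/ {print $2}' /proc/meminfo)
if (( total == nr_hugepages + surp + resv )); then
  echo "nr_hugepages=$nr_hugepages verified"
else
  echo "hugepage accounting mismatch: HugePages_Total=$total" >&2
  exit 1
fi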
00:02:49.642 21:03:43 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:02:49.642 21:03:43 -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:49.642 21:03:43 -- setup/common.sh@18 -- # local node=
00:02:49.642 21:03:43 -- setup/common.sh@19 -- # local var val
00:02:49.642 21:03:43 -- setup/common.sh@20 -- # local mem_f mem
00:02:49.642 21:03:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:49.642 21:03:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:49.642 21:03:43 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:49.642 21:03:43 -- setup/common.sh@28 -- # mapfile -t mem
00:02:49.642 21:03:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:49.642 21:03:43 -- setup/common.sh@31 -- # IFS=': '
00:02:49.642 21:03:43 -- setup/common.sh@31 -- # read -r var val _
00:02:49.642 21:03:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437988 kB' 'MemFree: 106587412 kB' 'MemAvailable: 111300368 kB' 'Buffers: 2800 kB' 'Cached: 13376088 kB' 'SwapCached: 0 kB' 'Active: 9424160 kB' 'Inactive: 4601764 kB' 'Active(anon): 8852800 kB' 'Inactive(anon): 0 kB' 'Active(file): 571360 kB' 'Inactive(file): 4601764 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 656140 kB' 'Mapped: 190896 kB' 'Shmem: 8205764 kB' 'KReclaimable: 581176 kB' 'Slab: 1294344 kB' 'SReclaimable: 581176 kB' 'SUnreclaim: 713168 kB' 'KernelStack: 25312 kB' 'PageTables: 11384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559020 kB' 'Committed_AS: 10498696 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230808 kB' 'VmallocChunk: 0 kB' 'Percpu: 187392 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3698752 kB' 'DirectMap2M: 27535360 kB' 'DirectMap1G: 104857600 kB'
00:02:49.642 21:03:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:49.642 21:03:43 -- setup/common.sh@32 -- # continue
[... repeated setup/common.sh@31-32 xtrace elided through the remaining keys; the captured log breaks off mid-scan below ...]
00:02:49.644 21:03:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:49.644 21:03:43 --
setup/common.sh@32 -- # continue 00:02:49.644 21:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.644 21:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.644 21:03:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.644 21:03:43 -- setup/common.sh@32 -- # continue 00:02:49.644 21:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.644 21:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.644 21:03:43 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.644 21:03:43 -- setup/common.sh@32 -- # continue 00:02:49.644 21:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.644 21:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.644 21:03:43 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.644 21:03:43 -- setup/common.sh@32 -- # continue 00:02:49.644 21:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.644 21:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.644 21:03:43 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.644 21:03:43 -- setup/common.sh@32 -- # continue 00:02:49.644 21:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.644 21:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.644 21:03:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.644 21:03:43 -- setup/common.sh@33 -- # echo 1024 00:02:49.644 21:03:43 -- setup/common.sh@33 -- # return 0 00:02:49.644 21:03:43 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:49.644 21:03:43 -- setup/hugepages.sh@112 -- # get_nodes 00:02:49.644 21:03:43 -- setup/hugepages.sh@27 -- # local node 00:02:49.644 21:03:43 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:49.644 21:03:43 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:49.644 21:03:43 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:49.644 21:03:43 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:49.644 21:03:43 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:49.644 21:03:43 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:49.644 21:03:43 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:49.644 21:03:43 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:49.644 21:03:43 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:49.644 21:03:43 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:49.644 21:03:43 -- setup/common.sh@18 -- # local node=0 00:02:49.644 21:03:43 -- setup/common.sh@19 -- # local var val 00:02:49.644 21:03:43 -- setup/common.sh@20 -- # local mem_f mem 00:02:49.644 21:03:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:49.644 21:03:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:49.644 21:03:43 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:49.644 21:03:43 -- setup/common.sh@28 -- # mapfile -t mem 00:02:49.644 21:03:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:49.644 21:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.644 21:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.644 21:03:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65755980 kB' 'MemFree: 52542340 kB' 'MemUsed: 13213640 kB' 'SwapCached: 0 kB' 'Active: 6496820 kB' 'Inactive: 3450444 kB' 'Active(anon): 6088644 kB' 'Inactive(anon): 0 kB' 'Active(file): 408176 kB' 
'Inactive(file): 3450444 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9533028 kB' 'Mapped: 112276 kB' 'AnonPages: 423328 kB' 'Shmem: 5674408 kB' 'KernelStack: 12328 kB' 'PageTables: 6356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 263912 kB' 'Slab: 677000 kB' 'SReclaimable: 263912 kB' 'SUnreclaim: 413088 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:49.644 21:03:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.644 21:03:43 -- setup/common.sh@32 -- # continue 00:02:49.644 21:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.644 21:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.644 21:03:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.644 21:03:43 -- setup/common.sh@32 -- # continue 00:02:49.644 21:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.644 21:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.644 21:03:43 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.644 21:03:43 -- setup/common.sh@32 -- # continue 00:02:49.644 21:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.644 21:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.644 21:03:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.644 21:03:43 -- setup/common.sh@32 -- # continue 00:02:49.644 21:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.644 21:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.644 21:03:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.644 21:03:43 -- setup/common.sh@32 -- # continue 00:02:49.644 21:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.644 21:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.644 21:03:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.644 21:03:43 -- setup/common.sh@32 -- # continue 00:02:49.644 21:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.644 21:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.645 21:03:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.645 21:03:43 -- setup/common.sh@32 -- # continue 00:02:49.645 21:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.645 21:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.645 21:03:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.645 21:03:43 -- setup/common.sh@32 -- # continue 00:02:49.645 21:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.645 21:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.645 21:03:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.645 21:03:43 -- setup/common.sh@32 -- # continue 00:02:49.645 21:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.645 21:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.645 21:03:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.645 21:03:43 -- setup/common.sh@32 -- # continue 00:02:49.645 21:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.645 21:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.645 21:03:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.645 21:03:43 -- setup/common.sh@32 
-- # continue 00:02:49.645 21:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.645 21:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.645 21:03:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.645 21:03:43 -- setup/common.sh@32 -- # continue 00:02:49.645 21:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.645 21:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.645 21:03:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.645 21:03:43 -- setup/common.sh@32 -- # continue 00:02:49.645 21:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.645 21:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.645 21:03:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.645 21:03:43 -- setup/common.sh@32 -- # continue 00:02:49.645 21:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.645 21:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.645 21:03:43 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.645 21:03:43 -- setup/common.sh@32 -- # continue 00:02:49.645 21:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.645 21:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.645 21:03:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.645 21:03:43 -- setup/common.sh@32 -- # continue 00:02:49.645 21:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.645 21:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.645 21:03:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.645 21:03:43 -- setup/common.sh@32 -- # continue 00:02:49.645 21:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.645 21:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.645 21:03:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.645 21:03:43 -- setup/common.sh@32 -- # continue 00:02:49.645 21:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.645 21:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.645 21:03:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.645 21:03:43 -- setup/common.sh@32 -- # continue 00:02:49.645 21:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.645 21:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.645 21:03:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.645 21:03:43 -- setup/common.sh@32 -- # continue 00:02:49.645 21:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.645 21:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.645 21:03:43 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.645 21:03:43 -- setup/common.sh@32 -- # continue 00:02:49.645 21:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.645 21:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.645 21:03:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.645 21:03:43 -- setup/common.sh@32 -- # continue 00:02:49.645 21:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.645 21:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.645 21:03:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.645 21:03:43 -- setup/common.sh@32 -- # continue 00:02:49.645 21:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.645 21:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.645 21:03:43 -- setup/common.sh@32 -- # [[ 
WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.645 21:03:43 -- setup/common.sh@32 -- # continue 00:02:49.645 21:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.645 21:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.645 21:03:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.645 21:03:43 -- setup/common.sh@32 -- # continue 00:02:49.645 21:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.645 21:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.645 21:03:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.645 21:03:43 -- setup/common.sh@32 -- # continue 00:02:49.645 21:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.645 21:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.645 21:03:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.645 21:03:43 -- setup/common.sh@32 -- # continue 00:02:49.645 21:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.645 21:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.645 21:03:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.645 21:03:43 -- setup/common.sh@32 -- # continue 00:02:49.645 21:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.645 21:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.645 21:03:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.645 21:03:43 -- setup/common.sh@32 -- # continue 00:02:49.645 21:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.645 21:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.645 21:03:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.645 21:03:43 -- setup/common.sh@32 -- # continue 00:02:49.645 21:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.645 21:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.645 21:03:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.645 21:03:43 -- setup/common.sh@32 -- # continue 00:02:49.645 21:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.645 21:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.645 21:03:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.645 21:03:43 -- setup/common.sh@32 -- # continue 00:02:49.645 21:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.645 21:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.645 21:03:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.645 21:03:43 -- setup/common.sh@32 -- # continue 00:02:49.645 21:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.645 21:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.645 21:03:43 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.645 21:03:43 -- setup/common.sh@32 -- # continue 00:02:49.645 21:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.645 21:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.645 21:03:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.645 21:03:43 -- setup/common.sh@32 -- # continue 00:02:49.645 21:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.645 21:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.645 21:03:43 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.645 21:03:43 -- setup/common.sh@32 -- # continue 00:02:49.645 21:03:43 -- setup/common.sh@31 -- # 
IFS=': ' 00:02:49.645 21:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.645 21:03:43 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.645 21:03:43 -- setup/common.sh@33 -- # echo 0 00:02:49.645 21:03:43 -- setup/common.sh@33 -- # return 0 00:02:49.645 21:03:43 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:49.645 21:03:43 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:49.645 21:03:43 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:49.645 21:03:43 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:49.645 21:03:43 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:49.645 21:03:43 -- setup/common.sh@18 -- # local node=1 00:02:49.645 21:03:43 -- setup/common.sh@19 -- # local var val 00:02:49.645 21:03:43 -- setup/common.sh@20 -- # local mem_f mem 00:02:49.645 21:03:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:49.645 21:03:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:49.645 21:03:43 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:49.645 21:03:43 -- setup/common.sh@28 -- # mapfile -t mem 00:02:49.645 21:03:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:49.645 21:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.645 21:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.646 21:03:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60682008 kB' 'MemFree: 54044484 kB' 'MemUsed: 6637524 kB' 'SwapCached: 0 kB' 'Active: 2927132 kB' 'Inactive: 1151320 kB' 'Active(anon): 2763948 kB' 'Inactive(anon): 0 kB' 'Active(file): 163184 kB' 'Inactive(file): 1151320 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3845876 kB' 'Mapped: 78612 kB' 'AnonPages: 232676 kB' 'Shmem: 2531372 kB' 'KernelStack: 12968 kB' 'PageTables: 4896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 317264 kB' 'Slab: 617344 kB' 'SReclaimable: 317264 kB' 'SUnreclaim: 300080 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:49.646 21:03:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.646 21:03:43 -- setup/common.sh@32 -- # continue 00:02:49.646 21:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.646 21:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.646 21:03:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.646 21:03:43 -- setup/common.sh@32 -- # continue 00:02:49.646 21:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.646 21:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.646 21:03:43 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.646 21:03:43 -- setup/common.sh@32 -- # continue 00:02:49.646 21:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.646 21:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.646 21:03:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.646 21:03:43 -- setup/common.sh@32 -- # continue 00:02:49.646 21:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.646 21:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.646 21:03:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.646 21:03:43 -- setup/common.sh@32 -- # 
continue 00:02:49.646 21:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.646 21:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.646 21:03:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.646 21:03:43 -- setup/common.sh@32 -- # continue 00:02:49.646 21:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.646 21:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.646 21:03:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.646 21:03:43 -- setup/common.sh@32 -- # continue 00:02:49.646 21:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.646 21:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.646 21:03:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.646 21:03:43 -- setup/common.sh@32 -- # continue 00:02:49.646 21:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.646 21:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.646 21:03:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.646 21:03:43 -- setup/common.sh@32 -- # continue 00:02:49.646 21:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.646 21:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.646 21:03:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.646 21:03:43 -- setup/common.sh@32 -- # continue 00:02:49.646 21:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.646 21:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.646 21:03:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.646 21:03:43 -- setup/common.sh@32 -- # continue 00:02:49.646 21:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.646 21:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.646 21:03:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.646 21:03:43 -- setup/common.sh@32 -- # continue 00:02:49.646 21:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.646 21:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.646 21:03:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.646 21:03:43 -- setup/common.sh@32 -- # continue 00:02:49.646 21:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.646 21:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.646 21:03:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.646 21:03:43 -- setup/common.sh@32 -- # continue 00:02:49.646 21:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.646 21:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.646 21:03:43 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.646 21:03:43 -- setup/common.sh@32 -- # continue 00:02:49.646 21:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.646 21:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.646 21:03:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.646 21:03:43 -- setup/common.sh@32 -- # continue 00:02:49.646 21:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.646 21:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.646 21:03:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.646 21:03:43 -- setup/common.sh@32 -- # continue 00:02:49.646 21:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.646 21:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.646 21:03:43 -- setup/common.sh@32 -- # [[ 
Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.646 21:03:43 -- setup/common.sh@32 -- # continue 00:02:49.646 21:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.646 21:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.646 21:03:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.646 21:03:43 -- setup/common.sh@32 -- # continue 00:02:49.646 21:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.646 21:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.646 21:03:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.646 21:03:43 -- setup/common.sh@32 -- # continue 00:02:49.646 21:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.646 21:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.646 21:03:43 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.646 21:03:43 -- setup/common.sh@32 -- # continue 00:02:49.646 21:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.646 21:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.646 21:03:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.646 21:03:43 -- setup/common.sh@32 -- # continue 00:02:49.646 21:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.646 21:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.646 21:03:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.646 21:03:43 -- setup/common.sh@32 -- # continue 00:02:49.646 21:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.646 21:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.646 21:03:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.646 21:03:43 -- setup/common.sh@32 -- # continue 00:02:49.646 21:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.646 21:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.646 21:03:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.646 21:03:43 -- setup/common.sh@32 -- # continue 00:02:49.646 21:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.646 21:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.646 21:03:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.646 21:03:43 -- setup/common.sh@32 -- # continue 00:02:49.646 21:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.646 21:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.646 21:03:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.646 21:03:43 -- setup/common.sh@32 -- # continue 00:02:49.646 21:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.646 21:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.646 21:03:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.646 21:03:43 -- setup/common.sh@32 -- # continue 00:02:49.646 21:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.646 21:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.646 21:03:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.646 21:03:43 -- setup/common.sh@32 -- # continue 00:02:49.646 21:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.646 21:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.646 21:03:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.646 21:03:43 -- setup/common.sh@32 -- # continue 00:02:49.646 21:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.646 
21:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.646 21:03:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.646 21:03:43 -- setup/common.sh@32 -- # continue 00:02:49.646 21:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.646 21:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.647 21:03:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.647 21:03:43 -- setup/common.sh@32 -- # continue 00:02:49.647 21:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.647 21:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.647 21:03:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.647 21:03:43 -- setup/common.sh@32 -- # continue 00:02:49.647 21:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.647 21:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.647 21:03:43 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.647 21:03:43 -- setup/common.sh@32 -- # continue 00:02:49.647 21:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.647 21:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.647 21:03:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.647 21:03:43 -- setup/common.sh@32 -- # continue 00:02:49.647 21:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.647 21:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.647 21:03:43 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.647 21:03:43 -- setup/common.sh@32 -- # continue 00:02:49.647 21:03:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.647 21:03:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.647 21:03:43 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.647 21:03:43 -- setup/common.sh@33 -- # echo 0 00:02:49.647 21:03:43 -- setup/common.sh@33 -- # return 0 00:02:49.647 21:03:43 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:49.647 21:03:43 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:49.647 21:03:43 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:49.647 21:03:43 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:49.647 21:03:43 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:49.647 node0=512 expecting 512 00:02:49.647 21:03:43 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:49.647 21:03:43 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:49.647 21:03:43 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:49.647 21:03:43 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:02:49.647 node1=512 expecting 512 00:02:49.647 21:03:43 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:02:49.647 00:02:49.647 real 0m3.056s 00:02:49.647 user 0m1.008s 00:02:49.647 sys 0m1.818s 00:02:49.647 21:03:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:49.647 21:03:43 -- common/autotest_common.sh@10 -- # set +x 00:02:49.647 ************************************ 00:02:49.647 END TEST even_2G_alloc 00:02:49.647 ************************************ 00:02:49.910 21:03:43 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:02:49.910 21:03:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:49.910 21:03:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:49.910 21:03:43 -- common/autotest_common.sh@10 -- # set +x 00:02:49.910 
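The get_meminfo traces condensed above all reduce to the same small routine: pick /proc/meminfo or a node's meminfo under /sys, strip the "Node N " prefix, then scan key/value pairs until the requested key matches. A minimal standalone sketch of that lookup, assuming Bash 4+ for mapfile; the function name and shape mirror the trace, but the body is an illustrative reconstruction, not the shipped setup/common.sh:

    #!/usr/bin/env bash
    shopt -s extglob                      # needed for the +([0-9]) pattern below

    get_meminfo() {                       # get_meminfo <Key> [<numa-node>]
        local get=$1 node=${2:-} var val _ line
        local mem_f=/proc/meminfo
        # Per-node counters live under /sys and carry a "Node N " prefix.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")  # strip the per-node prefix
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"                   # number only; the "kB" unit is dropped
            return 0
        done
        return 1
    }

    get_meminfo HugePages_Total           # e.g. 1024 during the pass above
    get_meminfo HugePages_Surp 0          # surplus pages on NUMA node 0

With it, get_meminfo HugePages_Surp 0 prints exactly the value the per-node loop above folds into nodes_test before comparing against the expected 512.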
************************************
00:02:49.910 START TEST odd_alloc
00:02:49.910 ************************************
00:02:49.910 21:03:44 -- common/autotest_common.sh@1111 -- # odd_alloc
00:02:49.910 21:03:44 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:02:49.910 21:03:44 -- setup/hugepages.sh@49 -- # local size=2098176
00:02:49.910 21:03:44 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:02:49.910 21:03:44 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:02:49.910 21:03:44 -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:02:49.910 21:03:44 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:02:49.910 21:03:44 -- setup/hugepages.sh@62 -- # user_nodes=()
00:02:49.910 21:03:44 -- setup/hugepages.sh@62 -- # local user_nodes
00:02:49.910 21:03:44 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:02:49.910 21:03:44 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:02:49.910 21:03:44 -- setup/hugepages.sh@67 -- # nodes_test=()
00:02:49.910 21:03:44 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:02:49.910 21:03:44 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:02:49.910 21:03:44 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:02:49.910 21:03:44 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:02:49.910 21:03:44 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:02:49.910 21:03:44 -- setup/hugepages.sh@83 -- # : 513
00:02:49.910 21:03:44 -- setup/hugepages.sh@84 -- # : 1
00:02:49.910 21:03:44 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:02:49.910 21:03:44 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:02:49.910 21:03:44 -- setup/hugepages.sh@83 -- # : 0
00:02:49.910 21:03:44 -- setup/hugepages.sh@84 -- # : 0
00:02:49.910 21:03:44 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:02:49.910 21:03:44 -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:02:49.910 21:03:44 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:02:49.910 21:03:44 -- setup/hugepages.sh@160 -- # setup output
00:02:49.910 21:03:44 -- setup/common.sh@9 -- # [[ output == output ]]
00:02:49.910 21:03:44 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh
00:02:52.463 0000:74:02.0 (8086 0cfe): Already using the vfio-pci driver
00:02:52.463 0000:c9:00.0 (144d a80a): Already using the vfio-pci driver
00:02:52.463 0000:f1:02.0 (8086 0cfe): Already using the vfio-pci driver
00:02:52.463 0000:79:02.0 (8086 0cfe): Already using the vfio-pci driver
00:02:52.463 0000:6f:01.0 (8086 0b25): Already using the vfio-pci driver
00:02:52.463 0000:6f:02.0 (8086 0cfe): Already using the vfio-pci driver
00:02:52.463 0000:f6:01.0 (8086 0b25): Already using the vfio-pci driver
00:02:52.463 0000:f6:02.0 (8086 0cfe): Already using the vfio-pci driver
00:02:52.463 0000:74:01.0 (8086 0b25): Already using the vfio-pci driver
00:02:52.463 0000:6a:02.0 (8086 0cfe): Already using the vfio-pci driver
00:02:52.463 0000:79:01.0 (8086 0b25): Already using the vfio-pci driver
00:02:52.463 0000:ec:01.0 (8086 0b25): Already using the vfio-pci driver
00:02:52.463 0000:6a:01.0 (8086 0b25): Already using the vfio-pci driver
00:02:52.463 0000:ec:02.0 (8086 0cfe): Already using the vfio-pci driver
00:02:52.463 0000:e7:01.0 (8086 0b25): Already using the vfio-pci driver
00:02:52.463 0000:e7:02.0 (8086 0cfe): Already using the vfio-pci driver
00:02:52.463 0000:f1:01.0 (8086 0b25): Already using the vfio-pci driver
00:02:52.463 0000:03:00.0 (1344 51c3): Already using the vfio-pci driver
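Before the device scan above, get_test_nr_hugepages traced the odd-count arithmetic: 2098176 kB at 2048 kB per page rounds up to 1025 pages, which cannot split evenly across two nodes, so one node is assigned 513. The same arithmetic in isolation, as a sketch; the variable names echo setup/hugepages.sh, but the rounding and remainder handling here are illustrative assumptions, not the shipped loop:

    # 2098176 kB requested, 2 MiB (2048 kB) hugepages, two NUMA nodes.
    size_kb=2098176
    page_kb=2048
    nr_hugepages=$(( (size_kb + page_kb - 1) / page_kb ))   # ceil -> 1025
    no_nodes=2
    declare -a nodes_test
    for (( node = no_nodes - 1; node >= 0; node-- )); do
        nodes_test[node]=$(( nr_hugepages / no_nodes ))     # 512 each
    done
    (( nodes_test[0] += nr_hugepages % no_nodes ))          # node0 absorbs the odd page
    echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"    # node0=513 node1=512

That matches the trace, where nodes_test picks up 512 for the last node first and 513 for the remaining one, and HUGEMEM=2049 (MiB) corresponds to the same 1025-page total.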
00:02:52.463 21:03:46 -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:02:52.463 21:03:46 -- setup/hugepages.sh@89 -- # local node
00:02:52.463 21:03:46 -- setup/hugepages.sh@90 -- # local sorted_t
00:02:52.463 21:03:46 -- setup/hugepages.sh@91 -- # local sorted_s
00:02:52.463 21:03:46 -- setup/hugepages.sh@92 -- # local surp
00:02:52.463 21:03:46 -- setup/hugepages.sh@93 -- # local resv
00:02:52.463 21:03:46 -- setup/hugepages.sh@94 -- # local anon
00:02:52.463 21:03:46 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:02:52.463 21:03:46 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:02:52.463 21:03:46 -- setup/common.sh@17 -- # local get=AnonHugePages
00:02:52.463 21:03:46 -- setup/common.sh@18 -- # local node=
00:02:52.463 21:03:46 -- setup/common.sh@19 -- # local var val
00:02:52.463 21:03:46 -- setup/common.sh@20 -- # local mem_f mem
00:02:52.463 21:03:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:52.463 21:03:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:52.463 21:03:46 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:52.463 21:03:46 -- setup/common.sh@28 -- # mapfile -t mem
00:02:52.463 21:03:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:52.463 21:03:46 -- setup/common.sh@31 -- # IFS=': '
00:02:52.463 21:03:46 -- setup/common.sh@31 -- # read -r var val _
00:02:52.463 21:03:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437988 kB' 'MemFree: 106606228 kB' 'MemAvailable: 111319184 kB' 'Buffers: 2800 kB' 'Cached: 13376176 kB' 'SwapCached: 0 kB' 'Active: 9423900 kB' 'Inactive: 4601764 kB' 'Active(anon): 8852540 kB' 'Inactive(anon): 0 kB' 'Active(file): 571360 kB' 'Inactive(file): 4601764 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 655492 kB' 'Mapped: 191012 kB' 'Shmem: 8205852 kB' 'KReclaimable: 581176 kB' 'Slab: 1293928 kB' 'SReclaimable: 581176 kB' 'SUnreclaim: 712752 kB' 'KernelStack: 25120 kB' 'PageTables: 10320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70557996 kB' 'Committed_AS: 10497644 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230680 kB' 'VmallocChunk: 0 kB' 'Percpu: 187392 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3698752 kB' 'DirectMap2M: 27535360 kB' 'DirectMap1G: 104857600 kB'
[xtrace condensed: one "@32 [[ <key> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]" / "@32 continue" pair per field preceding AnonHugePages]
00:02:52.465 21:03:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:52.465 21:03:46 -- setup/common.sh@33 -- # echo 0
00:02:52.465 21:03:46 -- setup/common.sh@33 -- # return 0
00:02:52.465 21:03:46 -- setup/hugepages.sh@97 -- # anon=0
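The @96 guard above reads the transparent-hugepage mode, which the kernel reports with the active choice bracketed ("always [madvise] never"); the AnonHugePages counter is only checked when that mode is not "[never]". A small sketch of the same guard, assuming the standard sysfs path; get_meminfo refers to the reconstruction sketched earlier:

    # Skip the anonymous-hugepage check when THP is disabled outright.
    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)   # 0 in the run above
    else
        anon=0
    fi
    echo "anon=$anon"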
00:02:52.465 21:03:46 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:02:52.465 21:03:46 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:52.465 21:03:46 -- setup/common.sh@18 -- # local node=
00:02:52.465 21:03:46 -- setup/common.sh@19 -- # local var val
00:02:52.465 21:03:46 -- setup/common.sh@20 -- # local mem_f mem
00:02:52.465 21:03:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:52.465 21:03:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:52.465 21:03:46 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:52.465 21:03:46 -- setup/common.sh@28 -- # mapfile -t mem
00:02:52.465 21:03:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:52.465 21:03:46 -- setup/common.sh@31 -- # IFS=': '
00:02:52.465 21:03:46 -- setup/common.sh@31 -- # read -r var val _
00:02:52.465 21:03:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437988 kB' 'MemFree: 106608472 kB' 'MemAvailable: 111321428 kB' 'Buffers: 2800 kB' 'Cached: 13376176 kB' 'SwapCached: 0 kB' 'Active: 9424716 kB' 'Inactive: 4601764 kB' 'Active(anon): 8853356 kB' 'Inactive(anon): 0 kB' 'Active(file): 571360 kB' 'Inactive(file): 4601764 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 656336 kB' 'Mapped: 191012 kB' 'Shmem: 8205852 kB' 'KReclaimable: 581176 kB' 'Slab: 1293904 kB' 'SReclaimable: 581176 kB' 'SUnreclaim: 712728 kB' 'KernelStack: 25264 kB' 'PageTables: 10348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70557996 kB' 'Committed_AS: 10497656 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230696 kB' 'VmallocChunk: 0 kB' 'Percpu: 187392 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3698752 kB' 'DirectMap2M: 27535360 kB' 'DirectMap1G: 104857600 kB'
[xtrace condensed: the @31/@32 compare-and-continue loop is still walking this snapshot toward HugePages_Surp when the capture ends]
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.466 21:03:46 -- setup/common.sh@32 -- # continue 00:02:52.466 21:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.466 21:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.466 21:03:46 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.466 21:03:46 -- setup/common.sh@32 -- # continue 00:02:52.466 21:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.466 21:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.466 21:03:46 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.466 21:03:46 -- setup/common.sh@32 -- # continue 00:02:52.466 21:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.466 21:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.466 21:03:46 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.466 21:03:46 -- setup/common.sh@32 -- # continue 00:02:52.466 21:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.466 21:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.466 21:03:46 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.466 21:03:46 -- setup/common.sh@32 -- # continue 00:02:52.466 21:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.466 21:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.466 21:03:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.466 21:03:46 -- setup/common.sh@32 -- # continue 00:02:52.466 21:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.466 21:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.466 21:03:46 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.466 21:03:46 -- setup/common.sh@32 -- # continue 00:02:52.466 21:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.466 21:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.466 21:03:46 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.466 21:03:46 -- setup/common.sh@32 -- # continue 00:02:52.466 21:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.466 21:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.466 21:03:46 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.466 21:03:46 -- setup/common.sh@33 -- # echo 0 00:02:52.466 21:03:46 -- setup/common.sh@33 -- # return 0 00:02:52.466 21:03:46 -- setup/hugepages.sh@99 -- # surp=0 00:02:52.466 21:03:46 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:52.466 21:03:46 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:52.466 21:03:46 -- setup/common.sh@18 -- # local node= 00:02:52.466 21:03:46 -- setup/common.sh@19 -- # local var val 00:02:52.466 21:03:46 -- setup/common.sh@20 -- # local mem_f mem 00:02:52.466 21:03:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:52.466 21:03:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:52.466 21:03:46 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:52.466 21:03:46 -- setup/common.sh@28 -- # mapfile -t mem 00:02:52.466 21:03:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:52.466 21:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.466 21:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.466 21:03:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437988 kB' 'MemFree: 106609508 kB' 'MemAvailable: 111322464 kB' 'Buffers: 2800 kB' 'Cached: 13376188 kB' 'SwapCached: 0 kB' 
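The block above is setup/common.sh's get_meminfo doing a linear scan: it snapshots /proc/meminfo (or a per-node meminfo file when a node argument is given), then read-loops field by field, continuing until the requested key matches, and echoes that field's value. A minimal standalone sketch of the same scan, assuming a stock /proc/meminfo layout; get_mem_field is a hypothetical name, not the actual helper from setup/common.sh:

    # get_mem_field KEY -> prints the value column for KEY from /proc/meminfo
    get_mem_field() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip non-matching fields
            echo "$val"
            return 0
        done < /proc/meminfo
        return 1                               # key not present
    }

    get_mem_field HugePages_Surp   # prints 0 on this box, matching the trace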
00:02:52.466 21:03:46 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:52.466 21:03:46 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:52.466 21:03:46 -- setup/common.sh@18 -- # local node=
00:02:52.466 21:03:46 -- setup/common.sh@19 -- # local var val
00:02:52.466 21:03:46 -- setup/common.sh@20 -- # local mem_f mem
00:02:52.466 21:03:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:52.466 21:03:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:52.466 21:03:46 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:52.466 21:03:46 -- setup/common.sh@28 -- # mapfile -t mem
00:02:52.466 21:03:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:52.466 21:03:46 -- setup/common.sh@31 -- # IFS=': '
00:02:52.466 21:03:46 -- setup/common.sh@31 -- # read -r var val _
00:02:52.466 21:03:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437988 kB' 'MemFree: 106609508 kB' 'MemAvailable: 111322464 kB' 'Buffers: 2800 kB' 'Cached: 13376188 kB' 'SwapCached: 0 kB' 'Active: 9425408 kB' 'Inactive: 4601764 kB' 'Active(anon): 8854048 kB' 'Inactive(anon): 0 kB' 'Active(file): 571360 kB' 'Inactive(file): 4601764 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 657080 kB' 'Mapped: 191004 kB' 'Shmem: 8205864 kB' 'KReclaimable: 581176 kB' 'Slab: 1293904 kB' 'SReclaimable: 581176 kB' 'SUnreclaim: 712728 kB' 'KernelStack: 25280 kB' 'PageTables: 10836 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70557996 kB' 'Committed_AS: 10501900 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230792 kB' 'VmallocChunk: 0 kB' 'Percpu: 187392 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3698752 kB' 'DirectMap2M: 27535360 kB' 'DirectMap1G: 104857600 kB'
00:02:52.466 21:03:46 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:52.466 21:03:46 -- setup/common.sh@32 -- # continue
00:02:52.466 21:03:46 -- setup/common.sh@31 -- # IFS=': '
00:02:52.466 21:03:46 -- setup/common.sh@31 -- # read -r var val _
00:02:52.468 21:03:46 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:52.468 21:03:46 -- setup/common.sh@33 -- # echo 0
00:02:52.468 21:03:46 -- setup/common.sh@33 -- # return 0
00:02:52.468 21:03:46 -- setup/hugepages.sh@100 -- # resv=0
00:02:52.468 21:03:46 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:02:52.468 nr_hugepages=1025
00:02:52.468 21:03:46 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:02:52.468 resv_hugepages=0
00:02:52.468 21:03:46 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:02:52.468 surplus_hugepages=0
00:02:52.468 21:03:46 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:52.468 anon_hugepages=0
00:02:52.468 21:03:46 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:02:52.468 21:03:46 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
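The @107 and @109 checks above are the hugepage accounting for this test: the 1025 pages it configured must equal HugePages_Total, with no surplus or reserved pages outstanding (surp=0, resv=0). The same arithmetic as a sketch, reusing the hypothetical get_mem_field helper from earlier:

    req=1025                                 # pages the test configured
    nr=$(get_mem_field HugePages_Total)      # 1025 in the dumps above
    surp=$(get_mem_field HugePages_Surp)     # 0
    resv=$(get_mem_field HugePages_Rsvd)     # 0
    # mirrors (( 1025 == nr_hugepages + surp + resv )) and (( 1025 == nr_hugepages ))
    (( req == nr + surp + resv && req == nr )) && echo 'hugepage accounting consistent'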
00:02:52.468 21:03:46 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:02:52.468 21:03:46 -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:52.468 21:03:46 -- setup/common.sh@18 -- # local node=
00:02:52.468 21:03:46 -- setup/common.sh@19 -- # local var val
00:02:52.468 21:03:46 -- setup/common.sh@20 -- # local mem_f mem
00:02:52.468 21:03:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:52.468 21:03:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:52.468 21:03:46 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:52.468 21:03:46 -- setup/common.sh@28 -- # mapfile -t mem
00:02:52.468 21:03:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:52.468 21:03:46 -- setup/common.sh@31 -- # IFS=': '
00:02:52.468 21:03:46 -- setup/common.sh@31 -- # read -r var val _
00:02:52.468 21:03:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437988 kB' 'MemFree: 106605132 kB' 'MemAvailable: 111318088 kB' 'Buffers: 2800 kB' 'Cached: 13376192 kB' 'SwapCached: 0 kB' 'Active: 9428624 kB' 'Inactive: 4601764 kB' 'Active(anon): 8857264 kB' 'Inactive(anon): 0 kB' 'Active(file): 571360 kB' 'Inactive(file): 4601764 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 660736 kB' 'Mapped: 191432 kB' 'Shmem: 8205868 kB' 'KReclaimable: 581176 kB' 'Slab: 1293896 kB' 'SReclaimable: 581176 kB' 'SUnreclaim: 712720 kB' 'KernelStack: 25472 kB' 'PageTables: 11160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70557996 kB' 'Committed_AS: 10503556 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230824 kB' 'VmallocChunk: 0 kB' 'Percpu: 187392 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3698752 kB' 'DirectMap2M: 27535360 kB' 'DirectMap1G: 104857600 kB'
00:02:52.468 21:03:46 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:52.469 21:03:46 -- setup/common.sh@32 -- # continue
00:02:52.469 21:03:46 -- setup/common.sh@31 -- # IFS=': '
00:02:52.469 21:03:46 -- setup/common.sh@31 -- # read -r var val _
00:02:52.734 21:03:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:52.734 21:03:46 -- setup/common.sh@33 -- # echo 1025
00:02:52.734 21:03:46 -- setup/common.sh@33 -- # return 0
00:02:52.734 21:03:46 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:02:52.734 21:03:46 -- setup/hugepages.sh@112 -- # get_nodes
00:02:52.734 21:03:46 -- setup/hugepages.sh@27 -- # local node
00:02:52.734 21:03:46 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:52.734 21:03:46 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:02:52.734 21:03:46 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:52.734 21:03:46 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513
00:02:52.734 21:03:46 -- setup/hugepages.sh@32 -- # no_nodes=2
00:02:52.734 21:03:46 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
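get_nodes above enumerates NUMA nodes with the extglob /sys/devices/system/node/node+([0-9]) and records a per-node hugepage count; this box has two nodes holding 512 and 513 pages, which is how the odd total of 1025 gets split so the test can check an uneven distribution. A sketch of the same enumeration with a portable glob, reading the kernel's per-node 2 MiB counters from sysfs; nodes_found is a hypothetical name:

    declare -A nodes_found
    for node in /sys/devices/system/node/node[0-9]*; do
        # per-node count of 2048 kB hugepages from the kernel's sysfs knobs
        nodes_found[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    echo "no_nodes=${#nodes_found[@]}"       # 2 here: node0=512, node1=513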
00:02:52.734 21:03:46 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:52.734 21:03:46 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:52.734 21:03:46 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:02:52.734 21:03:46 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:52.734 21:03:46 -- setup/common.sh@18 -- # local node=0
00:02:52.734 21:03:46 -- setup/common.sh@19 -- # local var val
00:02:52.734 21:03:46 -- setup/common.sh@20 -- # local mem_f mem
00:02:52.735 21:03:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:52.735 21:03:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:02:52.735 21:03:46 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:02:52.735 21:03:46 -- setup/common.sh@28 -- # mapfile -t mem
00:02:52.735 21:03:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:52.735 21:03:46 -- setup/common.sh@31 -- # IFS=': '
00:02:52.735 21:03:46 -- setup/common.sh@31 -- # read -r var val _
00:02:52.735 21:03:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65755980 kB' 'MemFree: 52532332 kB' 'MemUsed: 13223648 kB' 'SwapCached: 0 kB' 'Active: 6496688 kB' 'Inactive: 3450444 kB' 'Active(anon): 6088512 kB' 'Inactive(anon): 0 kB' 'Active(file): 408176 kB' 'Inactive(file): 3450444 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9533104 kB' 'Mapped: 112308 kB' 'AnonPages: 423112 kB' 'Shmem: 5674484 kB' 'KernelStack: 12280 kB' 'PageTables: 5908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 263912 kB' 'Slab: 676552 kB' 'SReclaimable: 263912 kB' 'SUnreclaim: 412640 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:02:52.735 21:03:46 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:52.735 21:03:46 -- setup/common.sh@32 -- # continue
00:02:52.735 21:03:46 -- setup/common.sh@31 -- # IFS=': '
00:02:52.735 21:03:46 -- setup/common.sh@31 -- # read -r var val _
00:02:52.735 21:03:46 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:52.735 21:03:46 -- setup/common.sh@33 -- # echo 0
00:02:52.736 21:03:46 -- setup/common.sh@33 -- # return 0
00:02:52.736 21:03:46 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:52.736 21:03:46 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:52.736 21:03:46 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
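For the per-node pass, get_meminfo swaps mem_f from /proc/meminfo to /sys/devices/system/node/nodeN/meminfo; every line in that file carries a "Node N " prefix, which is what the mem=("${mem[@]#Node +([0-9]) }") step strips before the scan. A quick way to eyeball the same counters directly, assuming node 0 as in the trace above:

    node=0
    grep -E 'HugePages_(Total|Free|Surp)' "/sys/devices/system/node/node$node/meminfo"
    # expected on this box, per the dump above:
    #   Node 0 HugePages_Total:   512
    #   Node 0 HugePages_Free:    512
    #   Node 0 HugePages_Surp:      0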
00:02:52.736 21:03:46 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:02:52.736 21:03:46 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:52.736 21:03:46 -- setup/common.sh@18 -- # local node=1
00:02:52.736 21:03:46 -- setup/common.sh@19 -- # local var val
00:02:52.736 21:03:46 -- setup/common.sh@20 -- # local mem_f mem
00:02:52.736 21:03:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:52.736 21:03:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:02:52.736 21:03:46 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:02:52.736 21:03:46 -- setup/common.sh@28 -- # mapfile -t mem
00:02:52.736 21:03:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:52.736 21:03:46 -- setup/common.sh@31 -- # IFS=': '
00:02:52.736 21:03:46 -- setup/common.sh@31 -- # read -r var val _
00:02:52.736 21:03:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60682008 kB' 'MemFree: 54067352 kB' 'MemUsed: 6614656 kB' 'SwapCached: 0 kB' 'Active: 2934836 kB' 'Inactive: 1151320 kB' 'Active(anon): 2771652 kB' 'Inactive(anon): 0 kB' 'Active(file): 163184 kB' 'Inactive(file): 1151320 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3845912 kB' 'Mapped: 79336 kB' 'AnonPages: 240496 kB' 'Shmem: 2531408 kB' 'KernelStack: 13272 kB' 'PageTables: 5308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 317264 kB' 'Slab: 617344 kB' 'SReclaimable: 317264 kB' 'SUnreclaim: 300080 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
00:02:52.736 21:03:46 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:52.736 21:03:46 -- setup/common.sh@32 -- # continue
00:02:52.736 21:03:46 -- setup/common.sh@31 -- # IFS=': '
00:02:52.736 21:03:46 -- setup/common.sh@31 -- # read -r var val _
00:02:52.736 21:03:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:52.736 21:03:46 -- setup/common.sh@32 -- # continue
00:02:52.736 21:03:46 -- setup/common.sh@31 -- # IFS=': '
00:02:52.736 21:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.736 
21:03:46 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.736 21:03:46 -- setup/common.sh@32 -- # continue 00:02:52.736 21:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.736 21:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.736 21:03:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.736 21:03:46 -- setup/common.sh@32 -- # continue 00:02:52.736 21:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.736 21:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.736 21:03:46 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.736 21:03:46 -- setup/common.sh@32 -- # continue 00:02:52.736 21:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.736 21:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.736 21:03:46 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.736 21:03:46 -- setup/common.sh@32 -- # continue 00:02:52.736 21:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.736 21:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.736 21:03:46 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.736 21:03:46 -- setup/common.sh@32 -- # continue 00:02:52.736 21:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.736 21:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.736 21:03:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.736 21:03:46 -- setup/common.sh@32 -- # continue 00:02:52.736 21:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.736 21:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.736 21:03:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.736 21:03:46 -- setup/common.sh@32 -- # continue 00:02:52.736 21:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.736 21:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.736 21:03:46 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.736 21:03:46 -- setup/common.sh@32 -- # continue 00:02:52.736 21:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.736 21:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.736 21:03:46 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.736 21:03:46 -- setup/common.sh@32 -- # continue 00:02:52.736 21:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.736 21:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.736 21:03:46 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.736 21:03:46 -- setup/common.sh@32 -- # continue 00:02:52.736 21:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.736 21:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.736 21:03:46 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.736 21:03:46 -- setup/common.sh@32 -- # continue 00:02:52.736 21:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.736 21:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.736 21:03:46 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.736 21:03:46 -- setup/common.sh@32 -- # continue 00:02:52.737 21:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.737 21:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.737 21:03:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.737 21:03:46 -- setup/common.sh@32 -- # continue 00:02:52.737 21:03:46 
-- setup/common.sh@31 -- # IFS=': ' 00:02:52.737 21:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.737 21:03:46 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.737 21:03:46 -- setup/common.sh@32 -- # continue 00:02:52.737 21:03:46 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.737 21:03:46 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.737 21:03:46 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.737 21:03:46 -- setup/common.sh@33 -- # echo 0 00:02:52.737 21:03:46 -- setup/common.sh@33 -- # return 0 00:02:52.737 21:03:46 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:52.737 21:03:46 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:52.737 21:03:46 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:52.737 21:03:46 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:52.737 21:03:46 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:02:52.737 node0=512 expecting 513 00:02:52.737 21:03:46 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:52.737 21:03:46 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:52.737 21:03:46 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:52.737 21:03:46 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:02:52.737 node1=513 expecting 512 00:02:52.737 21:03:46 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:02:52.737 00:02:52.737 real 0m2.759s 00:02:52.737 user 0m0.880s 00:02:52.737 sys 0m1.631s 00:02:52.737 21:03:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:52.737 21:03:46 -- common/autotest_common.sh@10 -- # set +x 00:02:52.737 ************************************ 00:02:52.737 END TEST odd_alloc 00:02:52.737 ************************************ 00:02:52.737 21:03:46 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:02:52.737 21:03:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:52.737 21:03:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:52.737 21:03:46 -- common/autotest_common.sh@10 -- # set +x 00:02:52.737 ************************************ 00:02:52.737 START TEST custom_alloc 00:02:52.737 ************************************ 00:02:52.737 21:03:46 -- common/autotest_common.sh@1111 -- # custom_alloc 00:02:52.737 21:03:46 -- setup/hugepages.sh@167 -- # local IFS=, 00:02:52.737 21:03:46 -- setup/hugepages.sh@169 -- # local node 00:02:52.737 21:03:46 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:02:52.737 21:03:46 -- setup/hugepages.sh@170 -- # local nodes_hp 00:02:52.737 21:03:46 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:02:52.737 21:03:46 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:02:52.737 21:03:46 -- setup/hugepages.sh@49 -- # local size=1048576 00:02:52.737 21:03:46 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:52.737 21:03:46 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:52.737 21:03:46 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:02:52.737 21:03:46 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:52.737 21:03:46 -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:52.737 21:03:46 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:52.737 21:03:46 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:02:52.737 21:03:46 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:52.737 21:03:46 -- setup/hugepages.sh@67 -- # nodes_test=() 
00:02:52.737 21:03:46 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:02:52.737 21:03:46 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:02:52.737 21:03:46 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:02:52.737 21:03:46 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:02:52.737 21:03:46 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:02:52.737 21:03:46 -- setup/hugepages.sh@83 -- # : 256
00:02:52.737 21:03:46 -- setup/hugepages.sh@84 -- # : 1
00:02:52.737 21:03:46 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:02:52.737 21:03:46 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:02:52.737 21:03:46 -- setup/hugepages.sh@83 -- # : 0
00:02:52.737 21:03:46 -- setup/hugepages.sh@84 -- # : 0
00:02:52.737 21:03:46 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:02:52.737 21:03:46 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:02:52.737 21:03:46 -- setup/hugepages.sh@176 -- # (( 2 > 1 ))
00:02:52.737 21:03:46 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152
00:02:52.737 21:03:46 -- setup/hugepages.sh@49 -- # local size=2097152
00:02:52.737 21:03:46 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:02:52.737 21:03:46 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:02:52.737 21:03:46 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:02:52.737 21:03:46 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:02:52.737 21:03:46 -- setup/hugepages.sh@62 -- # user_nodes=()
00:02:52.737 21:03:46 -- setup/hugepages.sh@62 -- # local user_nodes
00:02:52.737 21:03:46 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:02:52.737 21:03:46 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:02:52.737 21:03:46 -- setup/hugepages.sh@67 -- # nodes_test=()
00:02:52.737 21:03:46 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:02:52.737 21:03:46 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:02:52.737 21:03:46 -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:02:52.737 21:03:46 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:02:52.737 21:03:46 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:02:52.737 21:03:46 -- setup/hugepages.sh@78 -- # return 0
00:02:52.737 21:03:46 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
00:02:52.737 21:03:46 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:02:52.737 21:03:46 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:02:52.737 21:03:46 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:02:52.737 21:03:46 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:02:52.737 21:03:46 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:02:52.737 21:03:46 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:02:52.737 21:03:46 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:02:52.737 21:03:46 -- setup/hugepages.sh@62 -- # user_nodes=()
00:02:52.737 21:03:46 -- setup/hugepages.sh@62 -- # local user_nodes
00:02:52.737 21:03:46 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:02:52.737 21:03:46 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:02:52.737 21:03:46 -- setup/hugepages.sh@67 -- # nodes_test=()
00:02:52.737 21:03:46 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:02:52.737 21:03:46 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:02:52.737 21:03:46 -- setup/hugepages.sh@74 -- # (( 2 > 0 ))
00:02:52.737 21:03:46 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:02:52.737 21:03:46 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:02:52.737 21:03:46 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:02:52.737 21:03:46 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024
00:02:52.737 21:03:46 -- setup/hugepages.sh@78 -- # return 0
00:02:52.737 21:03:46 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
00:02:52.737 21:03:46 -- setup/hugepages.sh@187 -- # setup output
00:02:52.737 21:03:46 -- setup/common.sh@9 -- # [[ output == output ]]
00:02:52.737 21:03:46 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh
00:02:55.289 0000:74:02.0 (8086 0cfe): Already using the vfio-pci driver
00:02:55.289 0000:c9:00.0 (144d a80a): Already using the vfio-pci driver
00:02:55.289 0000:f1:02.0 (8086 0cfe): Already using the vfio-pci driver
00:02:55.289 0000:79:02.0 (8086 0cfe): Already using the vfio-pci driver
00:02:55.289 0000:6f:01.0 (8086 0b25): Already using the vfio-pci driver
00:02:55.289 0000:6f:02.0 (8086 0cfe): Already using the vfio-pci driver
00:02:55.289 0000:f6:01.0 (8086 0b25): Already using the vfio-pci driver
00:02:55.289 0000:f6:02.0 (8086 0cfe): Already using the vfio-pci driver
00:02:55.289 0000:74:01.0 (8086 0b25): Already using the vfio-pci driver
00:02:55.289 0000:6a:02.0 (8086 0cfe): Already using the vfio-pci driver
00:02:55.289 0000:79:01.0 (8086 0b25): Already using the vfio-pci driver
00:02:55.289 0000:ec:01.0 (8086 0b25): Already using the vfio-pci driver
00:02:55.289 0000:6a:01.0 (8086 0b25): Already using the vfio-pci driver
00:02:55.289 0000:ec:02.0 (8086 0cfe): Already using the vfio-pci driver
00:02:55.289 0000:e7:01.0 (8086 0b25): Already using the vfio-pci driver
00:02:55.289 0000:e7:02.0 (8086 0cfe): Already using the vfio-pci driver
00:02:55.289 0000:f1:01.0 (8086 0b25): Already using the vfio-pci driver
00:02:55.289 0000:03:00.0 (1344 51c3): Already using the vfio-pci driver
00:02:55.553 21:03:49 -- setup/hugepages.sh@188 -- # nr_hugepages=1536
00:02:55.553 21:03:49 -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:02:55.553 21:03:49 -- setup/hugepages.sh@89 -- # local node
00:02:55.553 21:03:49 -- setup/hugepages.sh@90 -- # local sorted_t
00:02:55.553 21:03:49 -- setup/hugepages.sh@91 -- # local sorted_s
00:02:55.553 21:03:49 -- setup/hugepages.sh@92 -- # local surp
00:02:55.553 21:03:49 -- setup/hugepages.sh@93 -- # local resv
00:02:55.553 21:03:49 -- setup/hugepages.sh@94 -- # local anon
00:02:55.553 21:03:49 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:02:55.553 21:03:49 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:02:55.553 21:03:49 -- setup/common.sh@17 -- # local get=AnonHugePages
00:02:55.553 21:03:49 -- setup/common.sh@18 -- # local node=
00:02:55.553 21:03:49 -- setup/common.sh@19 -- # local var val
00:02:55.553 21:03:49 -- setup/common.sh@20 -- # local mem_f mem
00:02:55.553 21:03:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:55.553 21:03:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:55.553 21:03:49 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:55.553 21:03:49 -- setup/common.sh@28 -- # mapfile -t mem
00:02:55.553 21:03:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:55.553 21:03:49 -- setup/common.sh@31 -- # IFS=': '
00:02:55.553 21:03:49 -- setup/common.sh@31 -- # read -r var val _
00:02:55.553 21:03:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437988 kB' 'MemFree: 105540476 kB' 'MemAvailable: 110253432 kB' 'Buffers: 2800 kB' 'Cached: 13376316 kB' 'SwapCached: 0 kB' 'Active: 9424944 kB' 'Inactive: 4601764 kB' 'Active(anon): 8853584 kB' 'Inactive(anon): 0 kB' 'Active(file): 571360 kB' 'Inactive(file): 4601764 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 656332 kB' 'Mapped: 190960 kB' 'Shmem: 8205992 kB' 'KReclaimable: 581176 kB' 'Slab: 1293228 kB' 'SReclaimable: 581176 kB' 'SUnreclaim: 712052 kB' 'KernelStack: 25152 kB' 'PageTables: 10300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70034732 kB' 'Committed_AS: 10496920 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230616 kB' 'VmallocChunk: 0 kB' 'Percpu: 187392 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3698752 kB' 'DirectMap2M: 27535360 kB' 'DirectMap1G: 104857600 kB'
00:02:55.553 21:03:49 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
[xtrace field scan elided: MemFree through HardwareCorrupted compared and skipped]
00:02:55.554 21:03:49 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:55.554 21:03:49 -- setup/common.sh@33 -- # echo 0
00:02:55.554 21:03:49 -- setup/common.sh@33 -- # return 0
00:02:55.554 21:03:49 -- setup/hugepages.sh@97 -- # anon=0
00:02:55.554 21:03:49 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:02:55.554 21:03:49 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:55.554 21:03:49 -- setup/common.sh@18 -- # local node=
00:02:55.554 21:03:49 -- setup/common.sh@19 -- # local var val
00:02:55.554 21:03:49 -- setup/common.sh@20 -- # local mem_f mem
00:02:55.554 21:03:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:55.555 21:03:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:55.555 21:03:49 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:55.555 21:03:49 -- setup/common.sh@28 -- # mapfile -t mem
00:02:55.555 21:03:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:55.555 21:03:49 -- setup/common.sh@31 -- # IFS=': '
00:02:55.555 21:03:49 -- setup/common.sh@31 -- # read -r var val _
00:02:55.555 21:03:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437988 kB' 'MemFree: 105543764 kB' 'MemAvailable: 110256720 kB' 'Buffers: 2800 kB' 'Cached: 13376316 kB' 'SwapCached: 0 kB' 'Active: 9425284 kB' 'Inactive: 4601764 kB' 'Active(anon): 8853924 kB' 'Inactive(anon): 0 kB' 'Active(file): 571360 kB' 'Inactive(file): 4601764 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 657264 kB' 'Mapped: 190956 kB' 'Shmem: 8205992 kB' 'KReclaimable: 581176 kB' 'Slab: 1293220 kB' 'SReclaimable: 581176 kB' 'SUnreclaim: 712044 kB' 'KernelStack: 25200 kB' 'PageTables: 10428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70034732 kB' 'Committed_AS: 10496932 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230600 kB' 'VmallocChunk: 0 kB' 'Percpu: 187392 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3698752 kB' 'DirectMap2M: 27535360 kB' 'DirectMap1G: 104857600 kB'
00:02:55.555 21:03:49 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
[xtrace field scan elided: MemFree through HugePages_Rsvd compared and skipped]
00:02:55.556 21:03:49 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:55.556 21:03:49 -- setup/common.sh@33 -- # echo 0
00:02:55.556 21:03:49 -- setup/common.sh@33 -- # return 0
00:02:55.556 21:03:49 -- setup/hugepages.sh@99 -- # surp=0
00:02:55.556 21:03:49 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:55.556 21:03:49 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:55.556 21:03:49 -- setup/common.sh@18 -- # local node=
00:02:55.556 21:03:49 -- setup/common.sh@19 -- # local var val
00:02:55.556 21:03:49 -- setup/common.sh@20 -- # local mem_f mem
00:02:55.556 21:03:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:55.556 21:03:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:55.556 21:03:49 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:55.556 21:03:49 -- setup/common.sh@28 -- # mapfile -t mem
00:02:55.556 21:03:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:55.556 21:03:49 -- setup/common.sh@31 -- # IFS=': '
00:02:55.556 21:03:49 -- setup/common.sh@31 -- # read -r var val _
00:02:55.556 21:03:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437988 kB' 'MemFree: 105543732 kB' 'MemAvailable: 110256688 kB' 'Buffers: 2800 kB' 'Cached: 13376324 kB' 'SwapCached: 0 kB' 'Active: 9425408 kB' 'Inactive: 4601764 kB' 'Active(anon): 8854048 kB' 'Inactive(anon): 0 kB' 'Active(file): 571360 kB' 'Inactive(file): 4601764 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 657360 kB' 'Mapped: 190948 kB' 'Shmem: 8206000 kB' 'KReclaimable: 581176 kB' 'Slab: 1293220 kB' 'SReclaimable: 581176 kB' 'SUnreclaim: 712044 kB' 'KernelStack: 25472 kB' 'PageTables: 10876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70034732 kB' 'Committed_AS: 10498472 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230824 kB' 'VmallocChunk: 0 kB' 'Percpu: 187392 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3698752 kB' 'DirectMap2M: 27535360 kB' 'DirectMap1G: 104857600 kB'
00:02:55.556 21:03:49 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
21:03:49 -- setup/common.sh@32 -- # continue 00:02:55.556 21:03:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.556 21:03:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.556 21:03:49 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.556 21:03:49 -- setup/common.sh@32 -- # continue 00:02:55.556 21:03:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.556 21:03:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.556 21:03:49 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.556 21:03:49 -- setup/common.sh@32 -- # continue 00:02:55.556 21:03:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.556 21:03:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.556 21:03:49 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.556 21:03:49 -- setup/common.sh@32 -- # continue 00:02:55.556 21:03:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.556 21:03:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.556 21:03:49 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.556 21:03:49 -- setup/common.sh@32 -- # continue 00:02:55.556 21:03:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.556 21:03:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.556 21:03:49 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.556 21:03:49 -- setup/common.sh@32 -- # continue 00:02:55.556 21:03:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.556 21:03:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.556 21:03:49 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.556 21:03:49 -- setup/common.sh@32 -- # continue 00:02:55.556 21:03:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.556 21:03:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.556 21:03:49 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.556 21:03:49 -- setup/common.sh@32 -- # continue 00:02:55.556 21:03:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.556 21:03:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.556 21:03:49 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.556 21:03:49 -- setup/common.sh@32 -- # continue 00:02:55.556 21:03:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.556 21:03:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.556 21:03:49 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.556 21:03:49 -- setup/common.sh@32 -- # continue 00:02:55.556 21:03:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.556 21:03:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.556 21:03:49 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.556 21:03:49 -- setup/common.sh@32 -- # continue 00:02:55.556 21:03:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.556 21:03:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.556 21:03:49 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.556 21:03:49 -- setup/common.sh@32 -- # continue 00:02:55.556 21:03:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.556 21:03:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.556 21:03:49 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.556 21:03:49 -- setup/common.sh@32 -- # continue 00:02:55.556 21:03:49 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.556 21:03:49 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.556 
21:03:49 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:55.556 21:03:49 -- setup/common.sh@32 -- # continue
[xtrace compacted: SwapFree through HugePages_Free are each tested against HugePages_Rsvd the same way and skipped via continue]
00:02:55.557 21:03:49 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:55.557 21:03:49 -- setup/common.sh@33 -- # echo 0
00:02:55.557 21:03:49 -- setup/common.sh@33 -- # return 0
00:02:55.557 21:03:49 -- setup/hugepages.sh@100 -- # resv=0
00:02:55.557 21:03:49 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:02:55.557 nr_hugepages=1536
00:02:55.557 21:03:49 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:02:55.557 resv_hugepages=0
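The block above is setup/common.sh's get_meminfo scanning /proc/meminfo entry by entry until the requested key (HugePages_Rsvd in this pass) matches, then echoing its value and returning. A minimal standalone sketch of that lookup, written here as an illustration rather than taken from the SPDK tree:

  #!/usr/bin/env bash
  # Sketch: look up one key in /proc/meminfo the way the traced loop does.
  get=HugePages_Rsvd        # example target; any meminfo key works
  while IFS=': ' read -r var val _; do
      # IFS=': ' splits "Key:   123 kB" into var=Key, val=123, _=kB
      [[ $var == "$get" ]] && { echo "$val"; exit 0; }
  done < /proc/meminfo
  exit 1                    # key not found

The backslash-escaped right-hand side in the trace ([[ ... == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]) is only how bash xtrace prints a pattern operand; the comparison is effectively a literal match.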
00:02:55.557 21:03:49 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:02:55.557 surplus_hugepages=0
00:02:55.557 21:03:49 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:55.557 anon_hugepages=0
00:02:55.557 21:03:49 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:02:55.557 21:03:49 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
00:02:55.557 21:03:49 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:02:55.557 21:03:49 -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:55.557 21:03:49 -- setup/common.sh@18 -- # local node=
00:02:55.557 21:03:49 -- setup/common.sh@19 -- # local var val
00:02:55.557 21:03:49 -- setup/common.sh@20 -- # local mem_f mem
00:02:55.557 21:03:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:55.557 21:03:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:55.557 21:03:49 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:55.557 21:03:49 -- setup/common.sh@28 -- # mapfile -t mem
00:02:55.557 21:03:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:55.557 21:03:49 -- setup/common.sh@31 -- # IFS=': '
00:02:55.557 21:03:49 -- setup/common.sh@31 -- # read -r var val _
00:02:55.558 21:03:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437988 kB' 'MemFree: 105543288 kB' 'MemAvailable: 110256244 kB' 'Buffers: 2800 kB' 'Cached: 13376344 kB' 'SwapCached: 0 kB' 'Active: 9425292 kB' 'Inactive: 4601764 kB' 'Active(anon): 8853932 kB' 'Inactive(anon): 0 kB' 'Active(file): 571360 kB' 'Inactive(file): 4601764 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 657128 kB' 'Mapped: 190948 kB' 'Shmem: 8206020 kB' 'KReclaimable: 581176 kB' 'Slab: 1293192 kB' 'SReclaimable: 581176 kB' 'SUnreclaim: 712016 kB' 'KernelStack: 25536 kB' 'PageTables: 11324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70034732 kB' 'Committed_AS: 10498484 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230856 kB' 'VmallocChunk: 0 kB' 'Percpu: 187392 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3698752 kB' 'DirectMap2M: 27535360 kB' 'DirectMap1G: 104857600 kB'
[xtrace compacted: MemTotal through Unaccepted are each tested against HugePages_Total and skipped via continue]
00:02:55.559 21:03:49 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:55.559 21:03:49 -- setup/common.sh@33 -- # echo 1536
00:02:55.559 21:03:49 -- setup/common.sh@33 -- # return 0
00:02:55.559 21:03:49 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:02:55.559 21:03:49 -- setup/hugepages.sh@112 -- # get_nodes
00:02:55.559 21:03:49 -- setup/hugepages.sh@27 -- # local node
00:02:55.559 21:03:49 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:55.559 21:03:49 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:02:55.559 21:03:49 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:55.559 21:03:49 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:02:55.559 21:03:49 -- setup/hugepages.sh@32 -- # no_nodes=2
00:02:55.559 21:03:49 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:02:55.559 21:03:49 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:55.559 21:03:49 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:55.559 21:03:49 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
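get_meminfo HugePages_Surp 0 is the same lookup redirected at one NUMA node: when a node argument is given and /sys/devices/system/node/node0/meminfo exists, that file replaces /proc/meminfo, and the "Node 0 " prefix its lines carry is stripped before parsing. A sketch of that selection logic; the sed prefix strip here is an assumption standing in for the mapfile/extglob expansion visible in the trace:

  #!/usr/bin/env bash
  # Sketch: node-aware meminfo lookup.
  get_meminfo() {
      local get=$1 node=$2 var val _
      local mem_f=/proc/meminfo
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      # per-node meminfo lines carry a "Node N " prefix; /proc/meminfo does not
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < <(sed 's/^Node [0-9]* //' "$mem_f")
      return 1
  }
  get_meminfo HugePages_Surp 0    # printed 0 for node0 in the run below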
00:02:55.559 21:03:49 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:55.559 21:03:49 -- setup/common.sh@18 -- # local node=0
00:02:55.559 21:03:49 -- setup/common.sh@19 -- # local var val
00:02:55.559 21:03:49 -- setup/common.sh@20 -- # local mem_f mem
00:02:55.559 21:03:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:55.559 21:03:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:02:55.559 21:03:49 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:02:55.559 21:03:49 -- setup/common.sh@28 -- # mapfile -t mem
00:02:55.559 21:03:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:55.559 21:03:49 -- setup/common.sh@31 -- # IFS=': '
00:02:55.559 21:03:49 -- setup/common.sh@31 -- # read -r var val _
00:02:55.559 21:03:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65755980 kB' 'MemFree: 52531720 kB' 'MemUsed: 13224260 kB' 'SwapCached: 0 kB' 'Active: 6499092 kB' 'Inactive: 3450444 kB' 'Active(anon): 6090916 kB' 'Inactive(anon): 0 kB' 'Active(file): 408176 kB' 'Inactive(file): 3450444 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9533204 kB' 'Mapped: 112336 kB' 'AnonPages: 425448 kB' 'Shmem: 5674584 kB' 'KernelStack: 12392 kB' 'PageTables: 6316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 263912 kB' 'Slab: 675940 kB' 'SReclaimable: 263912 kB' 'SUnreclaim: 412028 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace compacted: MemTotal through HugePages_Free are each tested against HugePages_Surp and skipped via continue]
00:02:55.560 21:03:49 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:55.560 21:03:49 -- setup/common.sh@33 -- # echo 0
00:02:55.560 21:03:49 -- setup/common.sh@33 -- # return 0
00:02:55.560 21:03:49 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:55.560 21:03:49 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:55.560 21:03:49 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:55.560 21:03:49 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:02:55.560 21:03:49 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:55.560 21:03:49 -- setup/common.sh@18 -- # local node=1
00:02:55.560 21:03:49 -- setup/common.sh@19 -- # local var val
00:02:55.560 21:03:49 -- setup/common.sh@20 -- # local mem_f mem
00:02:55.560 21:03:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:55.560 21:03:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:02:55.560 21:03:49 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:02:55.560 21:03:49 -- setup/common.sh@28 -- # mapfile -t mem
00:02:55.560 21:03:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:55.560 21:03:49 -- setup/common.sh@31 -- # IFS=': '
00:02:55.560 21:03:49 -- setup/common.sh@31 -- # read -r var val _
00:02:55.560 21:03:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60682008 kB' 'MemFree: 53010504 kB' 'MemUsed: 7671504 kB' 'SwapCached: 0 kB' 'Active: 2927024 kB' 'Inactive: 1151320 kB' 'Active(anon): 2763840 kB' 'Inactive(anon): 0 kB' 'Active(file): 163184 kB' 'Inactive(file): 1151320 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3845940 kB' 'Mapped: 78620 kB' 'AnonPages: 232572 kB' 'Shmem: 2531436 kB' 'KernelStack: 13160 kB' 'PageTables: 4880 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 317264 kB' 'Slab: 617156 kB' 'SReclaimable: 317264 kB' 'SUnreclaim: 299892 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace compacted: MemTotal through HugePages_Free are each tested against HugePages_Surp and skipped via continue]
00:02:55.561 21:03:49 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:55.561 21:03:49 -- setup/common.sh@33 -- # echo 0
00:02:55.561 21:03:49 -- setup/common.sh@33 -- # return 0
00:02:55.561 21:03:49 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:55.561 21:03:49 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:55.561 21:03:49 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:55.561 21:03:49 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:55.561 21:03:49 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:02:55.561 node0=512 expecting 512
00:02:55.561 21:03:49 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:55.561 21:03:49 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:55.561 21:03:49 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:55.561 21:03:49 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:02:55.561 node1=1024 expecting 1024
00:02:55.561 21:03:49 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:02:55.561
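After both per-node surplus reads come back 0, the test folds the results into one comparison: 512 pages expected on node0 and 1024 on node1, checked as the joined string 512,1024 at hugepages.sh@130. A compressed sketch of the same end-to-end check; the expected values are this run's requested split, not constants of the test:

  #!/usr/bin/env bash
  # Sketch: verify per-NUMA-node hugepage counts against expected values.
  declare -A expected=([0]=512 [1]=1024)   # this run's split, an assumption
  rc=0
  for dir in /sys/devices/system/node/node[0-9]*; do
      node=${dir##*node}
      # node meminfo lines look like: "Node 0 HugePages_Total:   512"
      actual=$(awk '$3 == "HugePages_Total:" {print $4}' "$dir/meminfo")
      echo "node${node}=${actual} expecting ${expected[$node]:-0}"
      [[ $actual == "${expected[$node]:-0}" ]] || rc=1
  done
  exit $rc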
00:02:55.561 real 0m2.871s
00:02:55.561 user 0m0.984s
00:02:55.561 sys 0m1.658s
00:02:55.561 21:03:49 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:02:55.561 21:03:49 -- common/autotest_common.sh@10 -- # set +x
00:02:55.561 ************************************
00:02:55.561 END TEST custom_alloc
00:02:55.561 ************************************
00:02:55.822 21:03:49 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:02:55.822 21:03:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:02:55.822 21:03:49 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:02:55.822 21:03:49 -- common/autotest_common.sh@10 -- # set +x
00:02:55.822 ************************************
00:02:55.822 START TEST no_shrink_alloc
00:02:55.823 ************************************
00:02:55.823 21:03:49 -- common/autotest_common.sh@1111 -- # no_shrink_alloc
00:02:55.823 21:03:49 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:02:55.823 21:03:49 -- setup/hugepages.sh@49 -- # local size=2097152
00:02:55.823 21:03:49 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:02:55.823 21:03:49 -- setup/hugepages.sh@51 -- # shift
00:02:55.823 21:03:49 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:02:55.823 21:03:49 -- setup/hugepages.sh@52 -- # local node_ids
00:02:55.823 21:03:49 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:02:55.823 21:03:49 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:02:55.823 21:03:49 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:02:55.823 21:03:49 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:02:55.823 21:03:49 -- setup/hugepages.sh@62 -- # local user_nodes
00:02:55.823 21:03:49 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:02:55.823 21:03:49 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:02:55.823 21:03:49 -- setup/hugepages.sh@67 -- # nodes_test=()
00:02:55.823 21:03:49 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:02:55.823 21:03:49 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:02:55.823 21:03:49 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:02:55.823 21:03:49 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:02:55.823 21:03:49 -- setup/hugepages.sh@73 -- # return 0
00:02:55.823 21:03:49 -- setup/hugepages.sh@198 -- # setup output
00:02:55.823 21:03:49 -- setup/common.sh@9 -- # [[ output == output ]]
00:02:55.823 21:03:49 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh
00:02:58.379 0000:74:02.0 (8086 0cfe): Already using the vfio-pci driver
00:02:58.379 0000:c9:00.0 (144d a80a): Already using the vfio-pci driver
00:02:58.379 0000:f1:02.0 (8086 0cfe): Already using the vfio-pci driver
00:02:58.379 0000:79:02.0 (8086 0cfe): Already using the vfio-pci driver
00:02:58.379 0000:6f:01.0 (8086 0b25): Already using the vfio-pci driver
00:02:58.379 0000:6f:02.0 (8086 0cfe): Already using the vfio-pci driver
00:02:58.379 0000:f6:01.0 (8086 0b25): Already using the vfio-pci driver
00:02:58.379 0000:f6:02.0 (8086 0cfe): Already using the vfio-pci driver
00:02:58.379 0000:74:01.0 (8086 0b25): Already using the vfio-pci driver
00:02:58.379 0000:6a:02.0 (8086 0cfe): Already using the vfio-pci driver
00:02:58.379 0000:79:01.0 (8086 0b25): Already using the vfio-pci driver
00:02:58.379 0000:ec:01.0 (8086 0b25): Already using the vfio-pci driver
00:02:58.379 0000:6a:01.0 (8086 0b25): Already using the vfio-pci driver
00:02:58.379 0000:ec:02.0 (8086 0cfe): Already using the vfio-pci driver
00:02:58.379 0000:e7:01.0 (8086 0b25): Already using the vfio-pci driver
00:02:58.379 0000:e7:02.0 (8086 0cfe): Already using the vfio-pci driver
00:02:58.379 0000:f1:01.0 (8086 0b25): Already using the vfio-pci driver
00:02:58.379 0000:03:00.0 (1344 51c3): Already using the vfio-pci driver
00:02:58.379 21:03:52 -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:02:58.379 21:03:52 -- setup/hugepages.sh@89 -- # local node
00:02:58.379 21:03:52 -- setup/hugepages.sh@90 -- # local sorted_t
00:02:58.379 21:03:52 -- setup/hugepages.sh@91 -- # local sorted_s
00:02:58.379 21:03:52 -- setup/hugepages.sh@92 -- # local surp
00:02:58.379 21:03:52 -- setup/hugepages.sh@93 -- # local resv
00:02:58.379 21:03:52 -- setup/hugepages.sh@94 -- # local anon
00:02:58.379 21:03:52 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:02:58.379 21:03:52 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:02:58.379 21:03:52 -- setup/common.sh@17 -- # local get=AnonHugePages
00:02:58.379 21:03:52 -- setup/common.sh@18 -- # local node=
00:02:58.379 21:03:52 -- setup/common.sh@19 -- # local var val
00:02:58.379 21:03:52 -- setup/common.sh@20 -- # local mem_f mem
00:02:58.379 21:03:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:58.379 21:03:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:58.379 21:03:52 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:58.379 21:03:52 -- setup/common.sh@28 -- # mapfile -t mem
00:02:58.379 21:03:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:58.379 21:03:52 -- setup/common.sh@31 -- # IFS=': '
00:02:58.379 21:03:52 -- setup/common.sh@31 -- # read -r var val _
00:02:58.380 21:03:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437988 kB' 'MemFree: 106604196 kB' 'MemAvailable: 111317120 kB' 'Buffers: 2800 kB' 'Cached: 13376432 kB' 'SwapCached: 0 kB' 'Active: 9422872 kB' 'Inactive: 4601764 kB' 'Active(anon): 8851512 kB' 'Inactive(anon): 0 kB' 'Active(file): 571360 kB' 'Inactive(file): 4601764 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 654580 kB' 'Mapped: 190996 kB' 'Shmem: 8206108 kB' 'KReclaimable: 581144 kB' 'Slab: 1293960 kB' 'SReclaimable: 581144 kB' 'SUnreclaim: 712816 kB' 'KernelStack: 25168 kB' 'PageTables: 10128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559020 kB' 'Committed_AS: 10496264 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230680 kB' 'VmallocChunk: 0 kB' 'Percpu: 187392 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3698752 kB' 'DirectMap2M: 27535360 kB' 'DirectMap1G: 104857600 kB'
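Before recounting, verify_nr_hugepages rules out interference from transparent huge pages: the mode string "always [madvise] never" is tested against *[never]*, and since THP is not disabled it samples AnonHugePages, which reads 0 kB in this run. A sketch of that guard as read from the trace:

  #!/usr/bin/env bash
  # Sketch: sample AnonHugePages only when transparent huge pages are not
  # pinned to [never], mirroring the guard traced above.
  thp=/sys/kernel/mm/transparent_hugepage/enabled
  anon=0
  if [[ -r $thp && $(<"$thp") != *"[never]"* ]]; then
      # value is in kB; dividing by Hugepagesize (2048 kB on this box) would
      # convert it to pages; omitted here since this run sampled 0
      anon=$(awk '$1 == "AnonHugePages:" {print $2}' /proc/meminfo)
  fi
  echo "anon=$anon"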
00:02:58.380 21:03:52 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:58.380 21:03:52 -- setup/common.sh@32 -- # continue
00:02:58.380 21:03:52 -- setup/common.sh@31 -- # IFS=': '
00:02:58.380 21:03:52 -- setup/common.sh@31 -- # read -r var val _
[... identical xtrace triplets elided: Buffers through HardwareCorrupted each fail the \A\n\o\n\H\u\g\e\P\a\g\e\s match and hit "continue" ...]
00:02:58.380 21:03:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:58.380 21:03:52 -- setup/common.sh@33 -- # echo 0
00:02:58.380 21:03:52 -- setup/common.sh@33 -- # return 0
00:02:58.380 21:03:52 -- setup/hugepages.sh@97 -- # anon=0
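The wall of @31/@32 entries above is setup/common.sh's get_meminfo helper scanning /proc/meminfo one "Key: value" pair at a time until the requested key matches. A minimal standalone sketch of that parsing pattern, with an illustrative function name (not the SPDK script itself):

#!/usr/bin/env bash
# Sketch of the get_meminfo loop traced above: split each
# "Key:   value kB" line on ': ' and print the value for the
# requested key; every other key falls through to "continue".
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1
}

get_meminfo_sketch AnonHugePages   # e.g. prints 0 on this box

The trailing "_" in "read -r var val _" soaks up the "kB" unit field, which is why the helper can echo bare numbers like 0 or 1024.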
00:02:58.380 21:03:52 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:02:58.380 21:03:52 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:58.381 21:03:52 -- setup/common.sh@18 -- # local node=
00:02:58.381 21:03:52 -- setup/common.sh@19 -- # local var val
00:02:58.381 21:03:52 -- setup/common.sh@20 -- # local mem_f mem
00:02:58.381 21:03:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:58.381 21:03:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:58.381 21:03:52 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:58.381 21:03:52 -- setup/common.sh@28 -- # mapfile -t mem
00:02:58.381 21:03:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:58.381 21:03:52 -- setup/common.sh@31 -- # IFS=': '
00:02:58.381 21:03:52 -- setup/common.sh@31 -- # read -r var val _
00:02:58.381 21:03:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437988 kB' 'MemFree: 106606968 kB' 'MemAvailable: 111319892 kB' 'Buffers: 2800 kB' 'Cached: 13376432 kB' 'SwapCached: 0 kB' 'Active: 9423484 kB' 'Inactive: 4601764 kB' 'Active(anon): 8852124 kB' 'Inactive(anon): 0 kB' 'Active(file): 571360 kB' 'Inactive(file): 4601764 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 655244 kB' 'Mapped: 190996 kB' 'Shmem: 8206108 kB' 'KReclaimable: 581144 kB' 'Slab: 1293960 kB' 'SReclaimable: 581144 kB' 'SUnreclaim: 712816 kB' 'KernelStack: 25152 kB' 'PageTables: 10076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559020 kB' 'Committed_AS: 10496276 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230632 kB' 'VmallocChunk: 0 kB' 'Percpu: 187392 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3698752 kB' 'DirectMap2M: 27535360 kB' 'DirectMap1G: 104857600 kB'
[... xtrace elided: MemTotal through HugePages_Rsvd each fail the \H\u\g\e\P\a\g\e\s\_\S\u\r\p match and hit "continue" ...]
00:02:58.382 21:03:52 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:58.382 21:03:52 -- setup/common.sh@33 -- # echo 0
00:02:58.382 21:03:52 -- setup/common.sh@33 -- # return 0
00:02:58.382 21:03:52 -- setup/hugepages.sh@99 -- # surp=0
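Note the @22-@29 prologue above: get_meminfo defaults to /proc/meminfo, but when a node id is passed (as happens later for node 0) it reads the per-node sysfs copy instead, whose lines carry a "Node <N> " prefix that the extglob substitution mem=("${mem[@]#Node +([0-9]) }") strips. A hedged sketch of that source selection; the paths are the real ones, the function name is illustrative:

#!/usr/bin/env bash
shopt -s extglob   # enables the +([0-9]) pattern used below

# Sketch of get_meminfo's node-aware input selection, as seen in the
# @22-@29 trace lines. Function name is illustrative.
read_meminfo() {
    local node=$1 mem_f=/proc/meminfo mem
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node lines read "Node 0 MemTotal: ..."; drop the prefix so
    # both sources parse identically afterwards.
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]}"
}

read_meminfo      # whole-system view
read_meminfo 0    # NUMA node 0 view, "Node 0 " prefix removed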
00:02:58.382 21:03:52 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:58.382 21:03:52 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:58.382 21:03:52 -- setup/common.sh@18 -- # local node=
00:02:58.382 21:03:52 -- setup/common.sh@19 -- # local var val
00:02:58.382 21:03:52 -- setup/common.sh@20 -- # local mem_f mem
00:02:58.382 21:03:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:58.382 21:03:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:58.382 21:03:52 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:58.382 21:03:52 -- setup/common.sh@28 -- # mapfile -t mem
00:02:58.382 21:03:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:58.382 21:03:52 -- setup/common.sh@31 -- # IFS=': '
00:02:58.382 21:03:52 -- setup/common.sh@31 -- # read -r var val _
00:02:58.382 21:03:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437988 kB' 'MemFree: 106607044 kB' 'MemAvailable: 111319968 kB' 'Buffers: 2800 kB' 'Cached: 13376436 kB' 'SwapCached: 0 kB' 'Active: 9423392 kB' 'Inactive: 4601764 kB' 'Active(anon): 8852032 kB' 'Inactive(anon): 0 kB' 'Active(file): 571360 kB' 'Inactive(file): 4601764 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 655184 kB' 'Mapped: 190964 kB' 'Shmem: 8206112 kB' 'KReclaimable: 581144 kB' 'Slab: 1293960 kB' 'SReclaimable: 581144 kB' 'SUnreclaim: 712816 kB' 'KernelStack: 25200 kB' 'PageTables: 10188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559020 kB' 'Committed_AS: 10496292 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230648 kB' 'VmallocChunk: 0 kB' 'Percpu: 187392 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3698752 kB' 'DirectMap2M: 27535360 kB' 'DirectMap1G: 104857600 kB'
[... xtrace elided: MemTotal through HugePages_Free each fail the \H\u\g\e\P\a\g\e\s\_\R\s\v\d match and hit "continue" ...]
00:02:58.384 21:03:52 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:58.384 21:03:52 -- setup/common.sh@33 -- # echo 0
00:02:58.384 21:03:52 -- setup/common.sh@33 -- # return 0
00:02:58.384 21:03:52 -- setup/hugepages.sh@100 -- # resv=0
00:02:58.384 21:03:52 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:02:58.384 nr_hugepages=1024
00:02:58.384 21:03:52 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:02:58.384 resv_hugepages=0
00:02:58.384 21:03:52 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:02:58.384 surplus_hugepages=0
00:02:58.384 21:03:52 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:58.384 anon_hugepages=0
00:02:58.384 21:03:52 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:58.384 21:03:52 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
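hugepages.sh@107-@109 is the consistency gate for this step: the expected page count must be explained by the allocated total plus surplus plus reserved pages. Restated as a stand-alone check (hypothetical script; the /proc/meminfo field names are the real ones):

#!/usr/bin/env bash
# Hypothetical restatement of the @107 invariant traced above.
expected=1024   # what the surrounding test configured

nr=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)

if (( expected == nr + surp + resv )); then
    echo "hugepage accounting consistent: $nr total, $surp surplus, $resv reserved"
else
    echo "hugepage accounting off: $nr + $surp + $resv != $expected" >&2
    exit 1
fi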
00:02:58.384 21:03:52 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:02:58.384 21:03:52 -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:58.384 21:03:52 -- setup/common.sh@18 -- # local node=
00:02:58.384 21:03:52 -- setup/common.sh@19 -- # local var val
00:02:58.384 21:03:52 -- setup/common.sh@20 -- # local mem_f mem
00:02:58.384 21:03:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:58.384 21:03:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:58.384 21:03:52 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:58.384 21:03:52 -- setup/common.sh@28 -- # mapfile -t mem
00:02:58.384 21:03:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:58.384 21:03:52 -- setup/common.sh@31 -- # IFS=': '
00:02:58.384 21:03:52 -- setup/common.sh@31 -- # read -r var val _
00:02:58.384 21:03:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437988 kB' 'MemFree: 106606568 kB' 'MemAvailable: 111319492 kB' 'Buffers: 2800 kB' 'Cached: 13376456 kB' 'SwapCached: 0 kB' 'Active: 9422692 kB' 'Inactive: 4601764 kB' 'Active(anon): 8851332 kB' 'Inactive(anon): 0 kB' 'Active(file): 571360 kB' 'Inactive(file): 4601764 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 654368 kB' 'Mapped: 190964 kB' 'Shmem: 8206132 kB' 'KReclaimable: 581144 kB' 'Slab: 1293976 kB' 'SReclaimable: 581144 kB' 'SUnreclaim: 712832 kB' 'KernelStack: 25200 kB' 'PageTables: 10112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559020 kB' 'Committed_AS: 10496308 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230664 kB' 'VmallocChunk: 0 kB' 'Percpu: 187392 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3698752 kB' 'DirectMap2M: 27535360 kB' 'DirectMap1G: 104857600 kB'
[... xtrace elided: MemTotal through Unaccepted each fail the \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l match and hit "continue" ...]
00:02:58.385 21:03:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:58.385 21:03:52 -- setup/common.sh@33 -- # echo 1024
00:02:58.385 21:03:52 -- setup/common.sh@33 -- # return 0
00:02:58.385 21:03:52 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:58.385 21:03:52 -- setup/hugepages.sh@112 -- # get_nodes
00:02:58.385 21:03:52 -- setup/hugepages.sh@27 -- # local node
00:02:58.385 21:03:52 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:58.385 21:03:52 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:02:58.385 21:03:52 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:58.385 21:03:52 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:02:58.385 21:03:52 -- setup/hugepages.sh@32 -- # no_nodes=2
00:02:58.385 21:03:52 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
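get_nodes (hugepages.sh@27-@33) globs /sys/devices/system/node/node+([0-9]) and records each node's hugepage total in nodes_sys (here node0=1024, node1=0); the loop that follows then re-queries each node and folds in surplus/reserved pages. A hedged sketch of that per-node walk, with an illustrative array name and the real sysfs paths:

#!/usr/bin/env bash
shopt -s nullglob
# Illustrative sketch of the get_nodes walk traced above: gather each
# NUMA node's HugePages_Total from its sysfs meminfo copy.
declare -A nodes_sketch
for node in /sys/devices/system/node/node[0-9]*; do
    id=${node##*node}   # "/sys/.../node0" -> "0"
    nodes_sketch[$id]=$(awk -v n="$id" \
        '$1 == "Node" && $2 == n && $3 == "HugePages_Total:" {print $4}' \
        "$node/meminfo")
done
for id in "${!nodes_sketch[@]}"; do
    echo "node$id=${nodes_sketch[$id]}"   # e.g. node0=1024, node1=0
done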
21:03:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.385 21:03:52 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.385 21:03:52 -- setup/common.sh@32 -- # continue 00:02:58.385 21:03:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.385 21:03:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.385 21:03:52 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.385 21:03:52 -- setup/common.sh@32 -- # continue 00:02:58.386 21:03:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.386 21:03:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.386 21:03:52 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.386 21:03:52 -- setup/common.sh@32 -- # continue 00:02:58.386 21:03:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.386 21:03:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.386 21:03:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.386 21:03:52 -- setup/common.sh@32 -- # continue 00:02:58.386 21:03:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.386 21:03:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.386 21:03:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.386 21:03:52 -- setup/common.sh@32 -- # continue 00:02:58.386 21:03:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.386 21:03:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.386 21:03:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.386 21:03:52 -- setup/common.sh@32 -- # continue 00:02:58.386 21:03:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.386 21:03:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.386 21:03:52 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.386 21:03:52 -- setup/common.sh@32 -- # continue 00:02:58.386 21:03:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.386 21:03:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.386 21:03:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.386 21:03:52 -- setup/common.sh@32 -- # continue 00:02:58.386 21:03:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.386 21:03:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.386 21:03:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.386 21:03:52 -- setup/common.sh@32 -- # continue 00:02:58.386 21:03:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.386 21:03:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.386 21:03:52 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.386 21:03:52 -- setup/common.sh@32 -- # continue 00:02:58.386 21:03:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.386 21:03:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.386 21:03:52 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.386 21:03:52 -- setup/common.sh@32 -- # continue 00:02:58.386 21:03:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.386 21:03:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.386 21:03:52 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.386 21:03:52 -- setup/common.sh@32 -- # continue 00:02:58.386 21:03:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.386 21:03:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.386 21:03:52 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.386 21:03:52 -- 
setup/common.sh@32 -- # continue 00:02:58.386 21:03:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.386 21:03:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.386 21:03:52 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.386 21:03:52 -- setup/common.sh@32 -- # continue 00:02:58.386 21:03:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.386 21:03:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.386 21:03:52 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.386 21:03:52 -- setup/common.sh@32 -- # continue 00:02:58.386 21:03:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.386 21:03:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.386 21:03:52 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.386 21:03:52 -- setup/common.sh@32 -- # continue 00:02:58.386 21:03:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.386 21:03:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.386 21:03:52 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.386 21:03:52 -- setup/common.sh@32 -- # continue 00:02:58.386 21:03:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.386 21:03:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.386 21:03:52 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.386 21:03:52 -- setup/common.sh@32 -- # continue 00:02:58.386 21:03:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.386 21:03:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.386 21:03:52 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.386 21:03:52 -- setup/common.sh@32 -- # continue 00:02:58.386 21:03:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.386 21:03:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.386 21:03:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.386 21:03:52 -- setup/common.sh@32 -- # continue 00:02:58.386 21:03:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.386 21:03:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.386 21:03:52 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.386 21:03:52 -- setup/common.sh@32 -- # continue 00:02:58.386 21:03:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.386 21:03:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.386 21:03:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.386 21:03:52 -- setup/common.sh@32 -- # continue 00:02:58.386 21:03:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.386 21:03:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.386 21:03:52 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.386 21:03:52 -- setup/common.sh@32 -- # continue 00:02:58.386 21:03:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.386 21:03:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.386 21:03:52 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.386 21:03:52 -- setup/common.sh@32 -- # continue 00:02:58.386 21:03:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.386 21:03:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.386 21:03:52 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.386 21:03:52 -- setup/common.sh@32 -- # continue 00:02:58.386 21:03:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.386 21:03:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.386 21:03:52 -- 
setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.386 21:03:52 -- setup/common.sh@32 -- # continue 00:02:58.386 21:03:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.386 21:03:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.386 21:03:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.386 21:03:52 -- setup/common.sh@32 -- # continue 00:02:58.386 21:03:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.386 21:03:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.386 21:03:52 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.386 21:03:52 -- setup/common.sh@32 -- # continue 00:02:58.386 21:03:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.386 21:03:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.386 21:03:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.386 21:03:52 -- setup/common.sh@32 -- # continue 00:02:58.386 21:03:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.386 21:03:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.386 21:03:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.386 21:03:52 -- setup/common.sh@32 -- # continue 00:02:58.386 21:03:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.386 21:03:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.386 21:03:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.386 21:03:52 -- setup/common.sh@32 -- # continue 00:02:58.386 21:03:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.386 21:03:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.386 21:03:52 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.386 21:03:52 -- setup/common.sh@32 -- # continue 00:02:58.386 21:03:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.386 21:03:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.386 21:03:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.386 21:03:52 -- setup/common.sh@32 -- # continue 00:02:58.386 21:03:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.386 21:03:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.386 21:03:52 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.386 21:03:52 -- setup/common.sh@32 -- # continue 00:02:58.386 21:03:52 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.386 21:03:52 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.386 21:03:52 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:58.386 21:03:52 -- setup/common.sh@33 -- # echo 0 00:02:58.386 21:03:52 -- setup/common.sh@33 -- # return 0 00:02:58.386 21:03:52 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:58.386 21:03:52 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:58.386 21:03:52 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:58.386 21:03:52 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:58.386 21:03:52 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:02:58.386 node0=1024 expecting 1024 00:02:58.386 21:03:52 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:02:58.386 21:03:52 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:02:58.386 21:03:52 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:02:58.386 21:03:52 -- setup/hugepages.sh@202 -- # setup output 00:02:58.386 21:03:52 -- setup/common.sh@9 -- # [[ output == output ]] 
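Editor's note on the trace above: the get_meminfo calls are xtrace output from the test's setup/common.sh helper, which reads either /proc/meminfo or a per-node sysfs meminfo file and scans it line by line for one field. The following is a minimal bash sketch reconstructed from the xtrace alone, not copied from the SPDK source, so function layout and details are approximate:

#!/usr/bin/env bash
# Reconstructed sketch of the get_meminfo helper traced above.
# Usage: get_meminfo <field> [node]
# Prints the value of <field> from /proc/meminfo, or from the per-node
# meminfo file when a node index is given.
shopt -s extglob

get_meminfo() {
    local get=$1 node=$2
    local var val
    local mem_f mem

    mem_f=/proc/meminfo
    # Per-node statistics live under sysfs; prefer them when a node was given.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <n> "; strip that prefix
    # so both file formats parse identically below (this expansion appears
    # verbatim in the trace).
    mem=("${mem[@]#Node +([0-9]) }")

    local line
    for line in "${mem[@]}"; do
        # Each line looks like "HugePages_Surp:       0" or "MemFree: ... kB".
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done
    return 1
}

get_meminfo HugePages_Surp 0   # prints 0 on the node0 traced above

The per-field [[ ... ]] / continue lines elided from the log are exactly this scan loop, expanded once per meminfo field by bash xtrace.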
00:02:58.386 21:03:52 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh
00:03:00.938 0000:74:02.0 (8086 0cfe): Already using the vfio-pci driver
00:03:00.938 0000:c9:00.0 (144d a80a): Already using the vfio-pci driver
00:03:00.938 0000:f1:02.0 (8086 0cfe): Already using the vfio-pci driver
00:03:00.938 0000:79:02.0 (8086 0cfe): Already using the vfio-pci driver
00:03:00.938 0000:6f:01.0 (8086 0b25): Already using the vfio-pci driver
00:03:00.938 0000:6f:02.0 (8086 0cfe): Already using the vfio-pci driver
00:03:00.938 0000:f6:01.0 (8086 0b25): Already using the vfio-pci driver
00:03:00.938 0000:f6:02.0 (8086 0cfe): Already using the vfio-pci driver
00:03:00.938 0000:74:01.0 (8086 0b25): Already using the vfio-pci driver
00:03:00.938 0000:6a:02.0 (8086 0cfe): Already using the vfio-pci driver
00:03:00.938 0000:79:01.0 (8086 0b25): Already using the vfio-pci driver
00:03:00.938 0000:ec:01.0 (8086 0b25): Already using the vfio-pci driver
00:03:00.938 0000:6a:01.0 (8086 0b25): Already using the vfio-pci driver
00:03:00.938 0000:ec:02.0 (8086 0cfe): Already using the vfio-pci driver
00:03:00.938 0000:e7:01.0 (8086 0b25): Already using the vfio-pci driver
00:03:00.938 0000:e7:02.0 (8086 0cfe): Already using the vfio-pci driver
00:03:00.938 0000:f1:01.0 (8086 0b25): Already using the vfio-pci driver
00:03:00.938 0000:03:00.0 (1344 51c3): Already using the vfio-pci driver
00:03:01.205 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:03:01.205 21:03:55 -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:03:01.205 21:03:55 -- setup/hugepages.sh@89 -- # local node
00:03:01.205 21:03:55 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:01.205 21:03:55 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:01.205 21:03:55 -- setup/hugepages.sh@92 -- # local surp
00:03:01.205 21:03:55 -- setup/hugepages.sh@93 -- # local resv
00:03:01.205 21:03:55 -- setup/hugepages.sh@94 -- # local anon
00:03:01.205 21:03:55 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:01.205 21:03:55 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:01.205 21:03:55 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:01.205 21:03:55 -- setup/common.sh@18 -- # local node=
00:03:01.205 21:03:55 -- setup/common.sh@19 -- # local var val
00:03:01.205 21:03:55 -- setup/common.sh@20 -- # local mem_f mem
00:03:01.205 21:03:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:01.205 21:03:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:01.205 21:03:55 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:01.205 21:03:55 -- setup/common.sh@28 -- # mapfile -t mem
00:03:01.205 21:03:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:01.205 21:03:55 -- setup/common.sh@31 -- # IFS=': '
00:03:01.205 21:03:55 -- setup/common.sh@31 -- # read -r var val _
00:03:01.206 21:03:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437988 kB' 'MemFree: 106619384 kB' 'MemAvailable: 111332308 kB' 'Buffers: 2800 kB' 'Cached: 13376544 kB' 'SwapCached: 0 kB' 'Active: 9424100 kB' 'Inactive: 4601764 kB' 'Active(anon): 8852740 kB' 'Inactive(anon): 0 kB' 'Active(file): 571360 kB' 'Inactive(file): 4601764 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 655740 kB' 'Mapped: 191056 kB' 'Shmem: 8206220 kB' 'KReclaimable: 581144 kB' 'Slab: 1294416 kB' 'SReclaimable: 581144 kB' 'SUnreclaim: 713272 kB' 'KernelStack: 25296 kB' 'PageTables: 10388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559020 kB' 'Committed_AS: 10497028 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230808 kB' 'VmallocChunk: 0 kB' 'Percpu: 187392 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3698752 kB' 'DirectMap2M: 27535360 kB' 'DirectMap1G: 104857600 kB'
[... xtrace elided: field-by-field scan of /proc/meminfo for AnonHugePages ...]
00:03:01.207 21:03:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:01.207 21:03:55 -- setup/common.sh@33 -- # echo 0
00:03:01.207 21:03:55 -- setup/common.sh@33 -- # return 0
00:03:01.207 21:03:55 -- setup/hugepages.sh@97 -- # anon=0
00:03:01.207 21:03:55 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:01.207 21:03:55 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:01.207 21:03:55 -- setup/common.sh@18 -- # local node=
00:03:01.207 21:03:55 -- setup/common.sh@19 -- # local var val
00:03:01.207 21:03:55 -- setup/common.sh@20 -- # local mem_f mem
00:03:01.207 21:03:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:01.207 21:03:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:01.207 21:03:55 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:01.207 21:03:55 -- setup/common.sh@28 -- # mapfile -t mem
00:03:01.207 21:03:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:01.207 21:03:55 -- setup/common.sh@31 -- # IFS=': '
00:03:01.207 21:03:55 -- setup/common.sh@31 -- # read -r var val _
00:03:01.207 21:03:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437988 kB' 'MemFree: 106623920 kB' 'MemAvailable: 111336844 kB' 'Buffers: 2800 kB' 'Cached: 13376544 kB' 'SwapCached: 0 kB' 'Active: 9424608 kB' 'Inactive: 4601764 kB' 'Active(anon): 8853248 kB' 'Inactive(anon): 0 kB' 'Active(file): 571360 kB' 'Inactive(file): 4601764 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 656276 kB' 'Mapped: 191068 kB' 'Shmem: 8206220 kB' 'KReclaimable: 581144 kB' 'Slab: 1294416 kB' 'SReclaimable: 581144 kB' 'SUnreclaim: 713272 kB' 'KernelStack: 25296 kB' 'PageTables: 10360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559020 kB' 'Committed_AS: 10497040 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230776 kB' 'VmallocChunk: 0 kB' 'Percpu: 187392 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3698752 kB' 'DirectMap2M: 27535360 kB' 'DirectMap1G: 104857600 kB'
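Editor's note: the hugepages.sh@96 test above, [[ always [madvise] never != *\[\n\e\v\e\r\]* ]], is comparing the contents of the transparent-hugepage mode file (where the bracketed word is the active mode) against the pattern *[never]*; anonymous huge pages are only counted when THP is not disabled. A hedged sketch of that decision, reusing the get_meminfo sketch above (the sysfs path is the standard Linux one; the surrounding structure is our own):

# Sketch: count anonymous (transparent) huge pages only when THP is enabled.
thp_mode_file=/sys/kernel/mm/transparent_hugepage/enabled

anon=0
if [[ -r $thp_mode_file ]]; then
    mode=$(<"$thp_mode_file")            # e.g. "always [madvise] never"
    if [[ $mode != *"[never]"* ]]; then
        # THP active in some mode, so the AnonHugePages field is meaningful.
        anon=$(get_meminfo AnonHugePages)   # kB; 0 in the trace above
    fi
fi
echo "anon_hugepages=$anon"

In this run the file read "always [madvise] never", the pattern did not match [never], and AnonHugePages came back 0, which is why the trace continues with anon=0.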
[... xtrace elided: field-by-field scan of /proc/meminfo for HugePages_Surp ...]
00:03:01.209 21:03:55 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:01.209 21:03:55 -- setup/common.sh@33 -- # echo 0
00:03:01.209 21:03:55 -- setup/common.sh@33 -- # return 0
00:03:01.209 21:03:55 -- setup/hugepages.sh@99 -- # surp=0
00:03:01.209 21:03:55 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:01.209 21:03:55 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:01.209 21:03:55 -- setup/common.sh@18 -- # local node=
00:03:01.209 21:03:55 -- setup/common.sh@19 -- # local var val
00:03:01.209 21:03:55 -- setup/common.sh@20 -- # local mem_f mem
00:03:01.209 21:03:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:01.209 21:03:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:01.209 21:03:55 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:01.209 21:03:55 -- setup/common.sh@28 -- # mapfile -t mem
00:03:01.209 21:03:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:01.209 21:03:55 -- setup/common.sh@31 -- # IFS=': '
00:03:01.209 21:03:55 -- setup/common.sh@31 -- # read -r var val _
00:03:01.209 21:03:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437988 kB' 'MemFree: 106624120 kB' 'MemAvailable: 111337044 kB' 'Buffers: 2800 kB' 'Cached: 13376548 kB' 'SwapCached: 0 kB' 'Active: 9426048 kB' 'Inactive: 4601764 kB' 'Active(anon): 8854688 kB' 'Inactive(anon): 0 kB' 'Active(file): 571360 kB' 'Inactive(file): 4601764 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 657756 kB' 'Mapped: 191484 kB' 'Shmem: 8206224 kB' 'KReclaimable: 581144 kB' 'Slab: 1294392 kB' 'SReclaimable: 581144 kB' 'SUnreclaim: 713248 kB' 'KernelStack: 25296 kB' 'PageTables: 10324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559020 kB' 'Committed_AS: 10501212 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230744 kB' 'VmallocChunk: 0 kB' 'Percpu: 187392 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3698752 kB' 'DirectMap2M: 27535360 kB' 'DirectMap1G: 104857600 kB'
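Editor's note: the repeated /proc/meminfo snapshots are internally consistent and allow a quick sanity check on the pool size: 1024 pages x 2048 kB per page = 2097152 kB, which is exactly the Hugetlb figure in every snapshot. A small check of that relationship on a live system, reusing the get_meminfo sketch from earlier (field names are standard /proc/meminfo keys):

# Sanity check: Hugetlb should equal HugePages_Total * Hugepagesize.
total=$(get_meminfo HugePages_Total)    # pages, 1024 in this run
pagesz=$(get_meminfo Hugepagesize)      # kB per page, 2048 in this run
hugetlb=$(get_meminfo Hugetlb)          # kB, 2097152 in this run
if (( total * pagesz == hugetlb )); then
    echo "hugetlb accounting consistent"
fi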
[... xtrace elided: field-by-field scan of /proc/meminfo for HugePages_Rsvd ...]
00:03:01.210 21:03:55 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:01.210 21:03:55 -- setup/common.sh@33 -- # echo 0
00:03:01.210 21:03:55 -- setup/common.sh@33 -- # return 0
00:03:01.210 21:03:55 -- setup/hugepages.sh@100 -- # resv=0
00:03:01.210 21:03:55 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:01.210 nr_hugepages=1024
00:03:01.210 21:03:55 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:01.210 resv_hugepages=0
00:03:01.210 21:03:55 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:01.210 surplus_hugepages=0
00:03:01.210 21:03:55 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:01.210 anon_hugepages=0
00:03:01.210 21:03:55 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:01.210 21:03:55 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:01.210 21:03:55 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
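Editor's note: putting the trace together, verify_nr_hugepages gathers the anon, surplus, and reserved counts, prints them, and then checks the configured total against nr_hugepages + surp + resv (both arithmetic tests above took the true branch, 1024 == 1024 + 0 + 0). A compact sketch of that verification, reconstructed from the xtrace rather than taken from the SPDK source, so treat the exact ordering and names as approximate:

# Reconstructed sketch of the verification traced above.
verify_nr_hugepages() {
    local nr_hugepages=$1    # expected pool size, 1024 in this run
    local surp resv anon total

    anon=$(get_meminfo AnonHugePages)
    surp=$(get_meminfo HugePages_Surp)
    resv=$(get_meminfo HugePages_Rsvd)

    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    echo "anon_hugepages=$anon"

    total=$(get_meminfo HugePages_Total)
    # The pool is healthy when the allocated total covers the requested
    # pages plus any surplus and reserved pages.
    (( total == nr_hugepages + surp + resv ))
}

verify_nr_hugepages 1024 && echo "hugepage pool verified"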
00:03:01.210 21:03:55 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:01.210 21:03:55 -- setup/common.sh@18 -- # local node=
00:03:01.210 21:03:55 -- setup/common.sh@19 -- # local var val
00:03:01.210 21:03:55 -- setup/common.sh@20 -- # local mem_f mem
00:03:01.210 21:03:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:01.210 21:03:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:01.210 21:03:55 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:01.210 21:03:55 -- setup/common.sh@28 -- # mapfile -t mem
00:03:01.210 21:03:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:01.210 21:03:55 -- setup/common.sh@31 -- # IFS=': '
00:03:01.210 21:03:55 -- setup/common.sh@31 -- # read -r var val _
00:03:01.211 21:03:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126437988 kB' 'MemFree: 106624552 kB' 'MemAvailable: 111337476 kB' 'Buffers: 2800 kB' 'Cached: 13376564 kB' 'SwapCached: 0 kB' 'Active: 9429704 kB' 'Inactive: 4601764 kB' 'Active(anon): 8858344 kB' 'Inactive(anon): 0 kB' 'Active(file): 571360 kB' 'Inactive(file): 4601764 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 660768 kB' 'Mapped: 191484 kB' 'Shmem: 8206240 kB' 'KReclaimable: 581144 kB' 'Slab: 1294472 kB' 'SReclaimable: 581144 kB' 'SUnreclaim: 713328 kB' 'KernelStack: 25264 kB' 'PageTables: 10232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70559020 kB' 'Committed_AS: 10504936 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230764 kB' 'VmallocChunk: 0 kB' 'Percpu: 187392 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3698752 kB' 'DirectMap2M: 27535360 kB' 'DirectMap1G: 104857600 kB'
[... xtrace elided: field-by-field scan of /proc/meminfo for HugePages_Total; the excerpt ends mid-scan ...]
21:03:55 -- setup/common.sh@32 -- # continue 00:03:01.212 21:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.212 21:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.212 21:03:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.212 21:03:55 -- setup/common.sh@32 -- # continue 00:03:01.212 21:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.212 21:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.212 21:03:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.212 21:03:55 -- setup/common.sh@32 -- # continue 00:03:01.212 21:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.212 21:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.212 21:03:55 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.212 21:03:55 -- setup/common.sh@32 -- # continue 00:03:01.212 21:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.212 21:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.212 21:03:55 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.212 21:03:55 -- setup/common.sh@32 -- # continue 00:03:01.212 21:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.212 21:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.212 21:03:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.212 21:03:55 -- setup/common.sh@32 -- # continue 00:03:01.212 21:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.212 21:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.212 21:03:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.212 21:03:55 -- setup/common.sh@32 -- # continue 00:03:01.212 21:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.212 21:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.212 21:03:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.212 21:03:55 -- setup/common.sh@32 -- # continue 00:03:01.212 21:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.212 21:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.212 21:03:55 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.212 21:03:55 -- setup/common.sh@32 -- # continue 00:03:01.212 21:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.212 21:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.212 21:03:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.212 21:03:55 -- setup/common.sh@32 -- # continue 00:03:01.212 21:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.212 21:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.212 21:03:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.212 21:03:55 -- setup/common.sh@32 -- # continue 00:03:01.212 21:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.212 21:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.212 21:03:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.212 21:03:55 -- setup/common.sh@32 -- # continue 00:03:01.212 21:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.212 21:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.212 21:03:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.212 21:03:55 -- setup/common.sh@32 -- # continue 00:03:01.212 21:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.212 21:03:55 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:01.212 21:03:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.212 21:03:55 -- setup/common.sh@32 -- # continue 00:03:01.212 21:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.212 21:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.212 21:03:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.212 21:03:55 -- setup/common.sh@32 -- # continue 00:03:01.212 21:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.212 21:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.212 21:03:55 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.212 21:03:55 -- setup/common.sh@32 -- # continue 00:03:01.212 21:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.212 21:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.212 21:03:55 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.212 21:03:55 -- setup/common.sh@32 -- # continue 00:03:01.212 21:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.212 21:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.212 21:03:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.212 21:03:55 -- setup/common.sh@32 -- # continue 00:03:01.212 21:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.212 21:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.212 21:03:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.212 21:03:55 -- setup/common.sh@33 -- # echo 1024 00:03:01.212 21:03:55 -- setup/common.sh@33 -- # return 0 00:03:01.212 21:03:55 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:01.212 21:03:55 -- setup/hugepages.sh@112 -- # get_nodes 00:03:01.212 21:03:55 -- setup/hugepages.sh@27 -- # local node 00:03:01.212 21:03:55 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:01.212 21:03:55 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:01.212 21:03:55 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:01.212 21:03:55 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:01.212 21:03:55 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:01.212 21:03:55 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:01.212 21:03:55 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:01.212 21:03:55 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:01.212 21:03:55 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:01.212 21:03:55 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:01.212 21:03:55 -- setup/common.sh@18 -- # local node=0 00:03:01.212 21:03:55 -- setup/common.sh@19 -- # local var val 00:03:01.212 21:03:55 -- setup/common.sh@20 -- # local mem_f mem 00:03:01.212 21:03:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:01.212 21:03:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:01.212 21:03:55 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:01.212 21:03:55 -- setup/common.sh@28 -- # mapfile -t mem 00:03:01.212 21:03:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:01.212 21:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.212 21:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.212 21:03:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65755980 kB' 'MemFree: 51493056 
kB' 'MemUsed: 14262924 kB' 'SwapCached: 0 kB' 'Active: 6496380 kB' 'Inactive: 3450444 kB' 'Active(anon): 6088204 kB' 'Inactive(anon): 0 kB' 'Active(file): 408176 kB' 'Inactive(file): 3450444 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9533372 kB' 'Mapped: 112536 kB' 'AnonPages: 422528 kB' 'Shmem: 5674752 kB' 'KernelStack: 12232 kB' 'PageTables: 5684 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 263880 kB' 'Slab: 677192 kB' 'SReclaimable: 263880 kB' 'SUnreclaim: 413312 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:01.212 21:03:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.212 21:03:55 -- setup/common.sh@32 -- # continue 00:03:01.212 21:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.212 21:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.212 21:03:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.213 21:03:55 -- setup/common.sh@32 -- # continue 00:03:01.213 21:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.213 21:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.213 21:03:55 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.213 21:03:55 -- setup/common.sh@32 -- # continue 00:03:01.213 21:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.213 21:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.213 21:03:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.213 21:03:55 -- setup/common.sh@32 -- # continue 00:03:01.213 21:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.213 21:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.213 21:03:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.213 21:03:55 -- setup/common.sh@32 -- # continue 00:03:01.213 21:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.213 21:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.213 21:03:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.213 21:03:55 -- setup/common.sh@32 -- # continue 00:03:01.213 21:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.213 21:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.213 21:03:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.213 21:03:55 -- setup/common.sh@32 -- # continue 00:03:01.213 21:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.213 21:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.213 21:03:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.213 21:03:55 -- setup/common.sh@32 -- # continue 00:03:01.213 21:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.213 21:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.213 21:03:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.213 21:03:55 -- setup/common.sh@32 -- # continue 00:03:01.213 21:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.213 21:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.213 21:03:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.213 21:03:55 -- setup/common.sh@32 -- # continue 00:03:01.213 21:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.213 21:03:55 -- setup/common.sh@31 
-- # read -r var val _ 00:03:01.213 21:03:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.213 21:03:55 -- setup/common.sh@32 -- # continue 00:03:01.213 21:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.213 21:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.213 21:03:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.213 21:03:55 -- setup/common.sh@32 -- # continue 00:03:01.213 21:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.213 21:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.213 21:03:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.213 21:03:55 -- setup/common.sh@32 -- # continue 00:03:01.213 21:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.213 21:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.213 21:03:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.213 21:03:55 -- setup/common.sh@32 -- # continue 00:03:01.213 21:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.213 21:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.213 21:03:55 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.213 21:03:55 -- setup/common.sh@32 -- # continue 00:03:01.213 21:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.213 21:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.213 21:03:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.213 21:03:55 -- setup/common.sh@32 -- # continue 00:03:01.213 21:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.213 21:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.213 21:03:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.213 21:03:55 -- setup/common.sh@32 -- # continue 00:03:01.213 21:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.213 21:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.213 21:03:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.213 21:03:55 -- setup/common.sh@32 -- # continue 00:03:01.213 21:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.213 21:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.213 21:03:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.213 21:03:55 -- setup/common.sh@32 -- # continue 00:03:01.213 21:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.213 21:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.213 21:03:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.213 21:03:55 -- setup/common.sh@32 -- # continue 00:03:01.213 21:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.213 21:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.213 21:03:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.213 21:03:55 -- setup/common.sh@32 -- # continue 00:03:01.213 21:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.213 21:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.213 21:03:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.213 21:03:55 -- setup/common.sh@32 -- # continue 00:03:01.213 21:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.213 21:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.213 21:03:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.213 21:03:55 -- setup/common.sh@32 -- # continue 00:03:01.213 
21:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.213 21:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.213 21:03:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.213 21:03:55 -- setup/common.sh@32 -- # continue 00:03:01.213 21:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.213 21:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.213 21:03:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.213 21:03:55 -- setup/common.sh@32 -- # continue 00:03:01.213 21:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.213 21:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.213 21:03:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.213 21:03:55 -- setup/common.sh@32 -- # continue 00:03:01.213 21:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.213 21:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.213 21:03:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.213 21:03:55 -- setup/common.sh@32 -- # continue 00:03:01.213 21:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.213 21:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.213 21:03:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.213 21:03:55 -- setup/common.sh@32 -- # continue 00:03:01.213 21:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.213 21:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.213 21:03:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.213 21:03:55 -- setup/common.sh@32 -- # continue 00:03:01.213 21:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.213 21:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.213 21:03:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.213 21:03:55 -- setup/common.sh@32 -- # continue 00:03:01.213 21:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.213 21:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.213 21:03:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.213 21:03:55 -- setup/common.sh@32 -- # continue 00:03:01.213 21:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.213 21:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.213 21:03:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.213 21:03:55 -- setup/common.sh@32 -- # continue 00:03:01.213 21:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.213 21:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.213 21:03:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.213 21:03:55 -- setup/common.sh@32 -- # continue 00:03:01.213 21:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.213 21:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.214 21:03:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.214 21:03:55 -- setup/common.sh@32 -- # continue 00:03:01.214 21:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.214 21:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.214 21:03:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.214 21:03:55 -- setup/common.sh@32 -- # continue 00:03:01.214 21:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.214 21:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.214 21:03:55 -- setup/common.sh@32 -- # 
[[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.214 21:03:55 -- setup/common.sh@32 -- # continue 00:03:01.214 21:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.214 21:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.214 21:03:55 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.214 21:03:55 -- setup/common.sh@33 -- # echo 0 00:03:01.214 21:03:55 -- setup/common.sh@33 -- # return 0 00:03:01.214 21:03:55 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:01.214 21:03:55 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:01.214 21:03:55 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:01.214 21:03:55 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:01.214 21:03:55 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:01.214 node0=1024 expecting 1024 00:03:01.214 21:03:55 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:01.214 00:03:01.214 real 0m5.507s 00:03:01.214 user 0m1.802s 00:03:01.214 sys 0m3.240s 00:03:01.214 21:03:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:01.214 21:03:55 -- common/autotest_common.sh@10 -- # set +x 00:03:01.214 ************************************ 00:03:01.214 END TEST no_shrink_alloc 00:03:01.214 ************************************ 00:03:01.214 21:03:55 -- setup/hugepages.sh@217 -- # clear_hp 00:03:01.214 21:03:55 -- setup/hugepages.sh@37 -- # local node hp 00:03:01.214 21:03:55 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:01.214 21:03:55 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:01.214 21:03:55 -- setup/hugepages.sh@41 -- # echo 0 00:03:01.214 21:03:55 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:01.214 21:03:55 -- setup/hugepages.sh@41 -- # echo 0 00:03:01.477 21:03:55 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:01.477 21:03:55 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:01.477 21:03:55 -- setup/hugepages.sh@41 -- # echo 0 00:03:01.477 21:03:55 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:01.477 21:03:55 -- setup/hugepages.sh@41 -- # echo 0 00:03:01.477 21:03:55 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:01.477 21:03:55 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:01.477 00:03:01.477 real 0m22.222s 00:03:01.477 user 0m6.945s 00:03:01.477 sys 0m12.544s 00:03:01.477 21:03:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:01.477 21:03:55 -- common/autotest_common.sh@10 -- # set +x 00:03:01.477 ************************************ 00:03:01.477 END TEST hugepages 00:03:01.477 ************************************ 00:03:01.477 21:03:55 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/driver.sh 00:03:01.477 21:03:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:01.477 21:03:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:01.477 21:03:55 -- common/autotest_common.sh@10 -- # set +x 00:03:01.477 ************************************ 00:03:01.477 START TEST driver 00:03:01.477 ************************************ 00:03:01.477 21:03:55 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/driver.sh 00:03:01.477 * Looking for test storage... 
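The long runs of compare-and-continue lines above are setup/common.sh scanning every /proc/meminfo field until it reaches the one requested (HugePages_Rsvd, then HugePages_Total, then the node-scoped HugePages_Surp). For reference, the same lookup fits in a few lines of bash; this is a minimal sketch under assumptions, with the helper simplified rather than copied from the SPDK tree:

    #!/usr/bin/env bash
    # Look up one field in /proc/meminfo, or in a node's meminfo when a
    # node number is given; per-node files prefix each line with
    # "Node <N> ", which the sed strips before parsing.
    get_meminfo() {
        local get=$1 node=$2 mem_f=/proc/meminfo var val _
        [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
        return 1
    }

    resv=$(get_meminfo HugePages_Rsvd)      # the "echo 0 / return 0" pair in the trace
    total=$(get_meminfo HugePages_Total 0)  # node-scoped variant
    echo "resv=$resv node0_total=$total"

Re-reading the whole file on each call is cheap: /proc/meminfo is tiny, so the helper can stay stateless.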
00:03:01.477 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup 00:03:01.477 21:03:55 -- setup/driver.sh@68 -- # setup reset 00:03:01.477 21:03:55 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:01.477 21:03:55 -- setup/common.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:03:05.680 21:03:59 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:05.680 21:03:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:05.680 21:03:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:05.680 21:03:59 -- common/autotest_common.sh@10 -- # set +x 00:03:05.941 ************************************ 00:03:05.942 START TEST guess_driver 00:03:05.942 ************************************ 00:03:05.942 21:04:00 -- common/autotest_common.sh@1111 -- # guess_driver 00:03:05.942 21:04:00 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:05.942 21:04:00 -- setup/driver.sh@47 -- # local fail=0 00:03:05.942 21:04:00 -- setup/driver.sh@49 -- # pick_driver 00:03:05.942 21:04:00 -- setup/driver.sh@36 -- # vfio 00:03:05.942 21:04:00 -- setup/driver.sh@21 -- # local iommu_grups 00:03:05.942 21:04:00 -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:05.942 21:04:00 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:05.942 21:04:00 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:05.942 21:04:00 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:05.942 21:04:00 -- setup/driver.sh@29 -- # (( 335 > 0 )) 00:03:05.942 21:04:00 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:05.942 21:04:00 -- setup/driver.sh@14 -- # mod vfio_pci 00:03:05.942 21:04:00 -- setup/driver.sh@12 -- # dep vfio_pci 00:03:05.942 21:04:00 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:05.942 21:04:00 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:05.942 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:05.942 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:05.942 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:05.942 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:05.942 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:05.942 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:05.942 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:05.942 21:04:00 -- setup/driver.sh@30 -- # return 0 00:03:05.942 21:04:00 -- setup/driver.sh@37 -- # echo vfio-pci 00:03:05.942 21:04:00 -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:05.942 21:04:00 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:05.942 21:04:00 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:05.942 Looking for driver=vfio-pci 00:03:05.942 21:04:00 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:05.942 21:04:00 -- setup/driver.sh@45 -- # setup output config 00:03:05.942 21:04:00 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:05.942 21:04:00 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh config 00:03:08.484 21:04:02 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:08.484 21:04:02 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 
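The driver pick traced earlier in this test boils down to: vfio-pci is chosen when the host has populated IOMMU groups (335 here) and modprobe can resolve the vfio_pci dependency chain. A sketch of that decision follows; the uio_pci_generic fallback is an assumption about the untaken branch, and only the vfio-pci path plus the 'No valid driver found' sentinel actually appear in this log:

    #!/usr/bin/env bash
    # Prefer vfio-pci when IOMMU groups exist and the module chain resolves.
    shopt -s nullglob
    pick_driver() {
        local groups=(/sys/kernel/iommu_groups/*)
        if (( ${#groups[@]} > 0 )) \
           && modprobe --show-depends vfio_pci 2>/dev/null | grep -q '\.ko'; then
            echo vfio-pci
        elif modprobe --show-depends uio_pci_generic &>/dev/null; then
            echo uio_pci_generic   # assumed fallback, not exercised in this run
        else
            echo 'No valid driver found'
            return 1
        fi
    }
    pick_driver   # 335 IOMMU groups on this host, so: vfio-pci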
00:03:08.484 21:04:02 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:08.744 21:04:02 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:08.744 21:04:02 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:08.744 21:04:02 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:08.744 21:04:02 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:08.744 21:04:02 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:08.744 21:04:02 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:08.744 21:04:02 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:08.744 21:04:02 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:08.744 21:04:02 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:08.744 21:04:02 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:08.744 21:04:02 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:08.744 21:04:02 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:08.744 21:04:02 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:08.744 21:04:02 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:08.744 21:04:02 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:08.744 21:04:02 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:08.744 21:04:02 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:08.745 21:04:02 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:08.745 21:04:02 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:08.745 21:04:02 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:08.745 21:04:02 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:08.745 21:04:03 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:08.745 21:04:03 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:08.745 21:04:03 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:09.005 21:04:03 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:09.005 21:04:03 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:09.005 21:04:03 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:09.005 21:04:03 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:09.005 21:04:03 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:09.005 21:04:03 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:09.005 21:04:03 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:09.005 21:04:03 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:09.005 21:04:03 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:09.005 21:04:03 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:09.005 21:04:03 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:09.005 21:04:03 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:09.005 21:04:03 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:09.005 21:04:03 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:09.005 21:04:03 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:09.005 21:04:03 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:09.005 21:04:03 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:09.005 21:04:03 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:09.005 21:04:03 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:09.005 21:04:03 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:09.005 21:04:03 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:09.577 21:04:03 -- setup/driver.sh@58 -- # [[ -> == \-\> 
]] 00:03:09.577 21:04:03 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:09.577 21:04:03 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:09.837 21:04:04 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:09.837 21:04:04 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:09.837 21:04:04 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:10.096 21:04:04 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:10.096 21:04:04 -- setup/driver.sh@65 -- # setup reset 00:03:10.096 21:04:04 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:10.096 21:04:04 -- setup/common.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:03:14.304 00:03:14.304 real 0m8.509s 00:03:14.304 user 0m1.978s 00:03:14.304 sys 0m4.005s 00:03:14.304 21:04:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:14.304 21:04:08 -- common/autotest_common.sh@10 -- # set +x 00:03:14.304 ************************************ 00:03:14.304 END TEST guess_driver 00:03:14.304 ************************************ 00:03:14.565 00:03:14.565 real 0m12.984s 00:03:14.565 user 0m3.100s 00:03:14.565 sys 0m6.162s 00:03:14.565 21:04:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:14.565 21:04:08 -- common/autotest_common.sh@10 -- # set +x 00:03:14.565 ************************************ 00:03:14.565 END TEST driver 00:03:14.565 ************************************ 00:03:14.565 21:04:08 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/devices.sh 00:03:14.565 21:04:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:14.565 21:04:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:14.565 21:04:08 -- common/autotest_common.sh@10 -- # set +x 00:03:14.565 ************************************ 00:03:14.565 START TEST devices 00:03:14.565 ************************************ 00:03:14.565 21:04:08 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/devices.sh 00:03:14.565 * Looking for test storage... 
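The devices suite that starts here first filters out zoned namespaces (the get_zoned_devs/is_block_zoned calls traced below): a device is usable only when /sys/block/<dev>/queue/zoned reads "none". Condensed into a standalone sketch, with variable names illustrative rather than taken from the script:

    #!/usr/bin/env bash
    # Collect zoned block devices so the tests can skip them.
    shopt -s nullglob
    declare -A zoned=()
    for dev in /sys/block/nvme*; do
        [[ -e $dev/queue/zoned ]] || continue
        [[ $(<"$dev/queue/zoned") == none ]] || zoned[${dev##*/}]=1
    done
    keys="${!zoned[*]}"
    echo "zoned devices: ${keys:-none}"   # empty in this run: both namespaces report "none"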
00:03:14.565 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup 00:03:14.565 21:04:08 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:14.565 21:04:08 -- setup/devices.sh@192 -- # setup reset 00:03:14.565 21:04:08 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:14.565 21:04:08 -- setup/common.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:03:17.923 21:04:11 -- setup/devices.sh@194 -- # get_zoned_devs 00:03:17.923 21:04:11 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:17.923 21:04:11 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:17.923 21:04:11 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:17.923 21:04:11 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:17.923 21:04:11 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:17.923 21:04:11 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:17.923 21:04:11 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:17.923 21:04:11 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:17.923 21:04:11 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:17.923 21:04:11 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:03:17.923 21:04:11 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:03:17.923 21:04:11 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:17.923 21:04:11 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:17.923 21:04:11 -- setup/devices.sh@196 -- # blocks=() 00:03:17.923 21:04:11 -- setup/devices.sh@196 -- # declare -a blocks 00:03:17.923 21:04:11 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:17.923 21:04:11 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:17.923 21:04:11 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:17.923 21:04:11 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:17.923 21:04:11 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:17.923 21:04:11 -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:17.923 21:04:11 -- setup/devices.sh@202 -- # pci=0000:c9:00.0 00:03:17.923 21:04:11 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\c\9\:\0\0\.\0* ]] 00:03:17.923 21:04:11 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:17.923 21:04:11 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:17.923 21:04:11 -- scripts/common.sh@387 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:17.923 No valid GPT data, bailing 00:03:17.923 21:04:11 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:17.923 21:04:11 -- scripts/common.sh@391 -- # pt= 00:03:17.924 21:04:11 -- scripts/common.sh@392 -- # return 1 00:03:17.924 21:04:11 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:17.924 21:04:11 -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:17.924 21:04:11 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:17.924 21:04:11 -- setup/common.sh@80 -- # echo 960197124096 00:03:17.924 21:04:11 -- setup/devices.sh@204 -- # (( 960197124096 >= min_disk_size )) 00:03:17.924 21:04:11 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:17.924 21:04:11 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:c9:00.0 00:03:17.924 21:04:11 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:17.924 21:04:11 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:03:17.924 21:04:11 -- 
setup/devices.sh@201 -- # ctrl=nvme1 00:03:17.924 21:04:11 -- setup/devices.sh@202 -- # pci=0000:03:00.0 00:03:17.924 21:04:11 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\3\:\0\0\.\0* ]] 00:03:17.924 21:04:11 -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:03:17.924 21:04:11 -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:03:17.924 21:04:11 -- scripts/common.sh@387 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py nvme1n1 00:03:17.924 No valid GPT data, bailing 00:03:17.924 21:04:11 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:17.924 21:04:11 -- scripts/common.sh@391 -- # pt= 00:03:17.924 21:04:11 -- scripts/common.sh@392 -- # return 1 00:03:17.924 21:04:11 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:03:17.924 21:04:11 -- setup/common.sh@76 -- # local dev=nvme1n1 00:03:17.924 21:04:11 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:03:17.924 21:04:11 -- setup/common.sh@80 -- # echo 960197124096 00:03:17.924 21:04:11 -- setup/devices.sh@204 -- # (( 960197124096 >= min_disk_size )) 00:03:17.924 21:04:11 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:17.924 21:04:11 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:03:00.0 00:03:17.924 21:04:11 -- setup/devices.sh@209 -- # (( 2 > 0 )) 00:03:17.924 21:04:11 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:17.924 21:04:11 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:17.924 21:04:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:17.924 21:04:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:17.924 21:04:11 -- common/autotest_common.sh@10 -- # set +x 00:03:17.924 ************************************ 00:03:17.924 START TEST nvme_mount 00:03:17.924 ************************************ 00:03:17.924 21:04:12 -- common/autotest_common.sh@1111 -- # nvme_mount 00:03:17.924 21:04:12 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:17.924 21:04:12 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:17.924 21:04:12 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:03:17.924 21:04:12 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:17.924 21:04:12 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:17.924 21:04:12 -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:17.924 21:04:12 -- setup/common.sh@40 -- # local part_no=1 00:03:17.924 21:04:12 -- setup/common.sh@41 -- # local size=1073741824 00:03:17.924 21:04:12 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:17.924 21:04:12 -- setup/common.sh@44 -- # parts=() 00:03:17.924 21:04:12 -- setup/common.sh@44 -- # local parts 00:03:17.924 21:04:12 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:17.924 21:04:12 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:17.924 21:04:12 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:17.924 21:04:12 -- setup/common.sh@46 -- # (( part++ )) 00:03:17.924 21:04:12 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:17.924 21:04:12 -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:17.924 21:04:12 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:17.924 21:04:12 -- setup/common.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:18.866 Creating new GPT entries in memory. 00:03:18.866 GPT data structures destroyed! 
You may now partition the disk using fdisk or 00:03:18.866 other utilities. 00:03:18.866 21:04:13 -- setup/common.sh@57 -- # (( part = 1 )) 00:03:18.866 21:04:13 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:18.866 21:04:13 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:18.866 21:04:13 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:18.866 21:04:13 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:19.805 Creating new GPT entries in memory. 00:03:19.805 The operation has completed successfully. 00:03:19.805 21:04:14 -- setup/common.sh@57 -- # (( part++ )) 00:03:19.805 21:04:14 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:19.805 21:04:14 -- setup/common.sh@62 -- # wait 1196895 00:03:20.064 21:04:14 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:03:20.064 21:04:14 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:20.064 21:04:14 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:03:20.064 21:04:14 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:20.064 21:04:14 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:20.064 21:04:14 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:03:20.064 21:04:14 -- setup/devices.sh@105 -- # verify 0000:c9:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:20.064 21:04:14 -- setup/devices.sh@48 -- # local dev=0000:c9:00.0 00:03:20.064 21:04:14 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:20.064 21:04:14 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:03:20.064 21:04:14 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:20.064 21:04:14 -- setup/devices.sh@53 -- # local found=0 00:03:20.064 21:04:14 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:20.064 21:04:14 -- setup/devices.sh@56 -- # : 00:03:20.064 21:04:14 -- setup/devices.sh@59 -- # local pci status 00:03:20.064 21:04:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.064 21:04:14 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:c9:00.0 00:03:20.064 21:04:14 -- setup/devices.sh@47 -- # setup output config 00:03:20.064 21:04:14 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:20.064 21:04:14 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh config 00:03:23.356 21:04:16 -- setup/devices.sh@62 -- # [[ 0000:c9:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:23.356 21:04:16 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:23.356 21:04:16 -- setup/devices.sh@63 -- # found=1 00:03:23.356 21:04:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:23.356 21:04:16 -- setup/devices.sh@62 -- # [[ 0000:74:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:23.356 21:04:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:23.356 21:04:16 -- setup/devices.sh@62 -- # [[ 
0000:f1:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:23.356 21:04:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:23.356 21:04:16 -- setup/devices.sh@62 -- # [[ 0000:79:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:23.356 21:04:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:23.356 21:04:16 -- setup/devices.sh@62 -- # [[ 0000:6f:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:23.356 21:04:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:23.356 21:04:16 -- setup/devices.sh@62 -- # [[ 0000:6f:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:23.356 21:04:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:23.356 21:04:16 -- setup/devices.sh@62 -- # [[ 0000:f6:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:23.356 21:04:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:23.356 21:04:16 -- setup/devices.sh@62 -- # [[ 0000:f6:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:23.356 21:04:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:23.356 21:04:16 -- setup/devices.sh@62 -- # [[ 0000:74:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:23.356 21:04:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:23.356 21:04:16 -- setup/devices.sh@62 -- # [[ 0000:6a:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:23.356 21:04:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:23.356 21:04:16 -- setup/devices.sh@62 -- # [[ 0000:79:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:23.356 21:04:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:23.356 21:04:16 -- setup/devices.sh@62 -- # [[ 0000:ec:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:23.356 21:04:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:23.356 21:04:16 -- setup/devices.sh@62 -- # [[ 0000:6a:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:23.356 21:04:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:23.356 21:04:16 -- setup/devices.sh@62 -- # [[ 0000:ec:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:23.356 21:04:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:23.356 21:04:16 -- setup/devices.sh@62 -- # [[ 0000:e7:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:23.356 21:04:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:23.356 21:04:16 -- setup/devices.sh@62 -- # [[ 0000:e7:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:23.356 21:04:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:23.356 21:04:16 -- setup/devices.sh@62 -- # [[ 0000:f1:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:23.356 21:04:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:23.356 21:04:16 -- setup/devices.sh@62 -- # [[ 0000:03:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:23.356 21:04:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:23.356 21:04:17 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:23.356 21:04:17 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:23.356 21:04:17 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:03:23.356 21:04:17 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:23.356 21:04:17 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:23.356 21:04:17 -- setup/devices.sh@110 -- # cleanup_nvme 00:03:23.356 21:04:17 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:03:23.356 21:04:17 -- setup/devices.sh@21 -- # umount 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:03:23.356 21:04:17 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:23.356 21:04:17 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:23.356 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:23.356 21:04:17 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:23.356 21:04:17 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:23.356 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:23.356 /dev/nvme0n1: 8 bytes were erased at offset 0xdf90355e00 (gpt): 45 46 49 20 50 41 52 54 00:03:23.356 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:23.356 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:23.356 21:04:17 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:23.356 21:04:17 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:23.356 21:04:17 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:03:23.356 21:04:17 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:23.356 21:04:17 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:23.356 21:04:17 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:03:23.356 21:04:17 -- setup/devices.sh@116 -- # verify 0000:c9:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:23.356 21:04:17 -- setup/devices.sh@48 -- # local dev=0000:c9:00.0 00:03:23.356 21:04:17 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:23.356 21:04:17 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:03:23.356 21:04:17 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:23.356 21:04:17 -- setup/devices.sh@53 -- # local found=0 00:03:23.356 21:04:17 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:23.356 21:04:17 -- setup/devices.sh@56 -- # : 00:03:23.356 21:04:17 -- setup/devices.sh@59 -- # local pci status 00:03:23.356 21:04:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:23.356 21:04:17 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:c9:00.0 00:03:23.356 21:04:17 -- setup/devices.sh@47 -- # setup output config 00:03:23.356 21:04:17 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:23.615 21:04:17 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh config 00:03:26.153 21:04:20 -- setup/devices.sh@62 -- # [[ 0000:c9:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:26.153 21:04:20 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:26.153 21:04:20 -- setup/devices.sh@63 -- # found=1 00:03:26.153 21:04:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.153 21:04:20 -- setup/devices.sh@62 -- # [[ 0000:74:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:26.153 21:04:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.153 21:04:20 -- setup/devices.sh@62 -- # [[ 
0000:f1:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:26.153 21:04:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.153 21:04:20 -- setup/devices.sh@62 -- # [[ 0000:79:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:26.153 21:04:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.153 21:04:20 -- setup/devices.sh@62 -- # [[ 0000:6f:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:26.153 21:04:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.153 21:04:20 -- setup/devices.sh@62 -- # [[ 0000:6f:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:26.153 21:04:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.153 21:04:20 -- setup/devices.sh@62 -- # [[ 0000:f6:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:26.153 21:04:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.153 21:04:20 -- setup/devices.sh@62 -- # [[ 0000:f6:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:26.153 21:04:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.153 21:04:20 -- setup/devices.sh@62 -- # [[ 0000:74:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:26.153 21:04:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.153 21:04:20 -- setup/devices.sh@62 -- # [[ 0000:6a:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:26.153 21:04:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.153 21:04:20 -- setup/devices.sh@62 -- # [[ 0000:79:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:26.153 21:04:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.153 21:04:20 -- setup/devices.sh@62 -- # [[ 0000:ec:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:26.153 21:04:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.153 21:04:20 -- setup/devices.sh@62 -- # [[ 0000:6a:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:26.153 21:04:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.153 21:04:20 -- setup/devices.sh@62 -- # [[ 0000:ec:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:26.153 21:04:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.153 21:04:20 -- setup/devices.sh@62 -- # [[ 0000:e7:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:26.153 21:04:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.153 21:04:20 -- setup/devices.sh@62 -- # [[ 0000:e7:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:26.153 21:04:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.153 21:04:20 -- setup/devices.sh@62 -- # [[ 0000:f1:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:26.153 21:04:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.153 21:04:20 -- setup/devices.sh@62 -- # [[ 0000:03:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:26.153 21:04:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.412 21:04:20 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:26.412 21:04:20 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:26.412 21:04:20 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:03:26.412 21:04:20 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:26.412 21:04:20 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:26.412 21:04:20 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:03:26.412 21:04:20 -- setup/devices.sh@125 -- # verify 0000:c9:00.0 data@nvme0n1 '' '' 00:03:26.412 21:04:20 -- setup/devices.sh@48 -- # local dev=0000:c9:00.0 
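Worth spelling out from the device scan earlier: a namespace qualifies only if it carries no recognizable partition table (spdk-gpt.py and the blkid PTTYPE probe both come back empty, hence "No valid GPT data, bailing" and return 1) and it is at least min_disk_size bytes. A sketch of the size half; the sector-count multiplication is an assumption consistent with the 960197124096-byte value echoed for these drives:

    #!/usr/bin/env bash
    # 3221225472 bytes = the 3 GiB floor used by devices.sh in this run.
    min_disk_size=$((3 * 1024 * 1024 * 1024))
    # /sys/block/<dev>/size counts 512-byte sectors regardless of the
    # drive's logical block size.
    sec_size_to_bytes() { echo $(( $(<"/sys/block/$1/size") * 512 )); }
    for blk in nvme0n1 nvme1n1; do
        (( $(sec_size_to_bytes "$blk") >= min_disk_size )) \
            && echo "$blk is big enough to test"
    done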
00:03:26.412 21:04:20 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:26.412 21:04:20 -- setup/devices.sh@50 -- # local mount_point= 00:03:26.412 21:04:20 -- setup/devices.sh@51 -- # local test_file= 00:03:26.412 21:04:20 -- setup/devices.sh@53 -- # local found=0 00:03:26.412 21:04:20 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:26.412 21:04:20 -- setup/devices.sh@59 -- # local pci status 00:03:26.412 21:04:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.412 21:04:20 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:c9:00.0 00:03:26.413 21:04:20 -- setup/devices.sh@47 -- # setup output config 00:03:26.413 21:04:20 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:26.413 21:04:20 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh config 00:03:29.702 21:04:23 -- setup/devices.sh@62 -- # [[ 0000:c9:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:29.702 21:04:23 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:29.702 21:04:23 -- setup/devices.sh@63 -- # found=1 00:03:29.702 21:04:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.702 21:04:23 -- setup/devices.sh@62 -- # [[ 0000:74:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:29.702 21:04:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.702 21:04:23 -- setup/devices.sh@62 -- # [[ 0000:f1:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:29.702 21:04:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.702 21:04:23 -- setup/devices.sh@62 -- # [[ 0000:79:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:29.702 21:04:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.702 21:04:23 -- setup/devices.sh@62 -- # [[ 0000:6f:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:29.702 21:04:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.702 21:04:23 -- setup/devices.sh@62 -- # [[ 0000:6f:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:29.702 21:04:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.702 21:04:23 -- setup/devices.sh@62 -- # [[ 0000:f6:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:29.702 21:04:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.702 21:04:23 -- setup/devices.sh@62 -- # [[ 0000:f6:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:29.702 21:04:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.702 21:04:23 -- setup/devices.sh@62 -- # [[ 0000:74:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:29.702 21:04:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.702 21:04:23 -- setup/devices.sh@62 -- # [[ 0000:6a:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:29.702 21:04:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.702 21:04:23 -- setup/devices.sh@62 -- # [[ 0000:79:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:29.702 21:04:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.702 21:04:23 -- setup/devices.sh@62 -- # [[ 0000:ec:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:29.702 21:04:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.702 21:04:23 -- setup/devices.sh@62 -- # [[ 0000:6a:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:29.702 21:04:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.702 21:04:23 -- setup/devices.sh@62 -- # [[ 0000:ec:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:29.702 21:04:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.702 21:04:23 -- setup/devices.sh@62 -- # [[ 0000:e7:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:29.702 21:04:23 -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.702 21:04:23 -- setup/devices.sh@62 -- # [[ 0000:e7:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:29.702 21:04:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.702 21:04:23 -- setup/devices.sh@62 -- # [[ 0000:f1:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:29.702 21:04:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.702 21:04:23 -- setup/devices.sh@62 -- # [[ 0000:03:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:29.702 21:04:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.702 21:04:23 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:29.702 21:04:23 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:29.702 21:04:23 -- setup/devices.sh@68 -- # return 0 00:03:29.702 21:04:23 -- setup/devices.sh@128 -- # cleanup_nvme 00:03:29.702 21:04:23 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:03:29.702 21:04:23 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:29.702 21:04:23 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:29.703 21:04:23 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:29.703 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:29.703 00:03:29.703 real 0m11.665s 00:03:29.703 user 0m2.952s 00:03:29.703 sys 0m5.879s 00:03:29.703 21:04:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:29.703 21:04:23 -- common/autotest_common.sh@10 -- # set +x 00:03:29.703 ************************************ 00:03:29.703 END TEST nvme_mount 00:03:29.703 ************************************ 00:03:29.703 21:04:23 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:29.703 21:04:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:29.703 21:04:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:29.703 21:04:23 -- common/autotest_common.sh@10 -- # set +x 00:03:29.703 ************************************ 00:03:29.703 START TEST dm_mount 00:03:29.703 ************************************ 00:03:29.703 21:04:23 -- common/autotest_common.sh@1111 -- # dm_mount 00:03:29.703 21:04:23 -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:29.703 21:04:23 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:29.703 21:04:23 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:29.703 21:04:23 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:29.703 21:04:23 -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:29.703 21:04:23 -- setup/common.sh@40 -- # local part_no=2 00:03:29.703 21:04:23 -- setup/common.sh@41 -- # local size=1073741824 00:03:29.703 21:04:23 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:29.703 21:04:23 -- setup/common.sh@44 -- # parts=() 00:03:29.703 21:04:23 -- setup/common.sh@44 -- # local parts 00:03:29.703 21:04:23 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:29.703 21:04:23 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:29.703 21:04:23 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:29.703 21:04:23 -- setup/common.sh@46 -- # (( part++ )) 00:03:29.703 21:04:23 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:29.703 21:04:23 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:29.703 21:04:23 -- setup/common.sh@46 -- # (( part++ )) 00:03:29.703 21:04:23 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:29.703 21:04:23 -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:29.703 21:04:23 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:29.703 21:04:23 -- setup/common.sh@53 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:30.640 Creating new GPT entries in memory. 00:03:30.640 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:30.640 other utilities. 00:03:30.640 21:04:24 -- setup/common.sh@57 -- # (( part = 1 )) 00:03:30.640 21:04:24 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:30.640 21:04:24 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:30.640 21:04:24 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:30.640 21:04:24 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:32.026 Creating new GPT entries in memory. 00:03:32.026 The operation has completed successfully. 00:03:32.026 21:04:25 -- setup/common.sh@57 -- # (( part++ )) 00:03:32.026 21:04:25 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:32.026 21:04:25 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:32.026 21:04:25 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:32.026 21:04:25 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:03:32.962 The operation has completed successfully. 00:03:32.962 21:04:26 -- setup/common.sh@57 -- # (( part++ )) 00:03:32.962 21:04:26 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:32.962 21:04:26 -- setup/common.sh@62 -- # wait 1201769 00:03:32.962 21:04:26 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:32.962 21:04:26 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 00:03:32.962 21:04:26 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:32.962 21:04:26 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:32.962 21:04:26 -- setup/devices.sh@160 -- # for t in {1..5} 00:03:32.962 21:04:26 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:32.962 21:04:26 -- setup/devices.sh@161 -- # break 00:03:32.962 21:04:26 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:32.962 21:04:26 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:32.962 21:04:26 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:03:32.962 21:04:26 -- setup/devices.sh@166 -- # dm=dm-0 00:03:32.963 21:04:26 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:03:32.963 21:04:26 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:03:32.963 21:04:26 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 00:03:32.963 21:04:26 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount size= 00:03:32.963 21:04:26 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 00:03:32.963 21:04:26 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:32.963 21:04:26 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:32.963 21:04:26 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 00:03:32.963 21:04:26 -- setup/devices.sh@174 -- # verify 0000:c9:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:32.963 21:04:26 -- setup/devices.sh@48 -- # local dev=0000:c9:00.0 00:03:32.963 21:04:26 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:32.963 21:04:26 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 00:03:32.963 21:04:27 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:32.963 21:04:27 -- setup/devices.sh@53 -- # local found=0 00:03:32.963 21:04:27 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:32.963 21:04:27 -- setup/devices.sh@56 -- # : 00:03:32.963 21:04:27 -- setup/devices.sh@59 -- # local pci status 00:03:32.963 21:04:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.963 21:04:27 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:c9:00.0 00:03:32.963 21:04:27 -- setup/devices.sh@47 -- # setup output config 00:03:32.963 21:04:27 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:32.963 21:04:27 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh config 00:03:35.496 21:04:29 -- setup/devices.sh@62 -- # [[ 0000:c9:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:35.496 21:04:29 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:35.496 21:04:29 -- setup/devices.sh@63 -- # found=1 00:03:35.496 21:04:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.496 21:04:29 -- setup/devices.sh@62 -- # [[ 0000:74:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:35.496 21:04:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.496 21:04:29 -- setup/devices.sh@62 -- # [[ 0000:f1:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:35.496 21:04:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.496 21:04:29 -- setup/devices.sh@62 -- # [[ 0000:79:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:35.496 21:04:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.496 21:04:29 -- setup/devices.sh@62 -- # [[ 0000:6f:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:35.496 21:04:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.496 21:04:29 -- setup/devices.sh@62 -- # [[ 0000:6f:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:35.496 21:04:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.496 21:04:29 -- setup/devices.sh@62 -- # [[ 0000:f6:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:35.496 21:04:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.496 21:04:29 -- setup/devices.sh@62 -- # [[ 0000:f6:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:35.496 21:04:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.496 21:04:29 -- setup/devices.sh@62 -- # [[ 0000:74:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:35.496 21:04:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.496 21:04:29 -- setup/devices.sh@62 -- # [[ 0000:6a:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:35.496 21:04:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.496 21:04:29 -- setup/devices.sh@62 -- # [[ 0000:79:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:35.496 21:04:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.496 21:04:29 -- setup/devices.sh@62 -- # [[ 0000:ec:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:35.496 21:04:29 -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:03:35.496 21:04:29 -- setup/devices.sh@62 -- # [[ 0000:6a:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:35.496 21:04:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.496 21:04:29 -- setup/devices.sh@62 -- # [[ 0000:ec:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:35.496 21:04:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.496 21:04:29 -- setup/devices.sh@62 -- # [[ 0000:e7:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:35.496 21:04:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.496 21:04:29 -- setup/devices.sh@62 -- # [[ 0000:e7:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:35.496 21:04:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.496 21:04:29 -- setup/devices.sh@62 -- # [[ 0000:f1:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:35.496 21:04:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.496 21:04:29 -- setup/devices.sh@62 -- # [[ 0000:03:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:35.496 21:04:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.755 21:04:29 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:35.755 21:04:29 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount ]] 00:03:35.755 21:04:29 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 00:03:35.755 21:04:29 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:35.755 21:04:29 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:35.755 21:04:29 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 00:03:35.755 21:04:29 -- setup/devices.sh@184 -- # verify 0000:c9:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:03:35.755 21:04:29 -- setup/devices.sh@48 -- # local dev=0000:c9:00.0 00:03:35.755 21:04:29 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:03:35.755 21:04:29 -- setup/devices.sh@50 -- # local mount_point= 00:03:35.755 21:04:29 -- setup/devices.sh@51 -- # local test_file= 00:03:35.755 21:04:29 -- setup/devices.sh@53 -- # local found=0 00:03:35.755 21:04:29 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:35.755 21:04:29 -- setup/devices.sh@59 -- # local pci status 00:03:35.755 21:04:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.755 21:04:29 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:c9:00.0 00:03:35.755 21:04:29 -- setup/devices.sh@47 -- # setup output config 00:03:35.755 21:04:29 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:35.755 21:04:29 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh config 00:03:38.291 21:04:32 -- setup/devices.sh@62 -- # [[ 0000:c9:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:38.291 21:04:32 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:03:38.291 21:04:32 -- setup/devices.sh@63 -- # found=1 00:03:38.291 21:04:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.291 21:04:32 -- setup/devices.sh@62 -- # [[ 0000:74:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:38.291 21:04:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.291 21:04:32 -- setup/devices.sh@62 -- # [[ 0000:f1:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:38.291 21:04:32 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.291 21:04:32 -- setup/devices.sh@62 -- # [[ 0000:79:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:38.291 21:04:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.291 21:04:32 -- setup/devices.sh@62 -- # [[ 0000:6f:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:38.291 21:04:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.291 21:04:32 -- setup/devices.sh@62 -- # [[ 0000:6f:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:38.291 21:04:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.291 21:04:32 -- setup/devices.sh@62 -- # [[ 0000:f6:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:38.291 21:04:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.291 21:04:32 -- setup/devices.sh@62 -- # [[ 0000:f6:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:38.291 21:04:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.291 21:04:32 -- setup/devices.sh@62 -- # [[ 0000:74:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:38.291 21:04:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.291 21:04:32 -- setup/devices.sh@62 -- # [[ 0000:6a:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:38.291 21:04:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.291 21:04:32 -- setup/devices.sh@62 -- # [[ 0000:79:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:38.291 21:04:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.291 21:04:32 -- setup/devices.sh@62 -- # [[ 0000:ec:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:38.291 21:04:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.291 21:04:32 -- setup/devices.sh@62 -- # [[ 0000:6a:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:38.291 21:04:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.291 21:04:32 -- setup/devices.sh@62 -- # [[ 0000:ec:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:38.291 21:04:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.291 21:04:32 -- setup/devices.sh@62 -- # [[ 0000:e7:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:38.291 21:04:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.291 21:04:32 -- setup/devices.sh@62 -- # [[ 0000:e7:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:38.291 21:04:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.291 21:04:32 -- setup/devices.sh@62 -- # [[ 0000:f1:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:38.291 21:04:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.291 21:04:32 -- setup/devices.sh@62 -- # [[ 0000:03:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:38.291 21:04:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.551 21:04:32 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:38.551 21:04:32 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:38.551 21:04:32 -- setup/devices.sh@68 -- # return 0 00:03:38.551 21:04:32 -- setup/devices.sh@187 -- # cleanup_dm 00:03:38.551 21:04:32 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 00:03:38.551 21:04:32 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:38.551 21:04:32 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:38.551 21:04:32 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:38.551 21:04:32 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:38.551 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:38.551 21:04:32 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:38.551 21:04:32 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:38.551 
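A condensed sketch of the dm_mount flow traced above, for readers reconstructing it from the xtrace: partition, map, format, mount, then tear down. The sgdisk offsets mirror the logged calls; the table fed to "dmsetup create nvme_dm_test" is not captured in the trace, so the linear concatenation of the two 1 GiB partitions below is an assumption, as are the shortened paths.

    disk=/dev/nvme0n1
    mnt=/tmp/dm_mount                        # stands in for .../spdk/test/setup/dm_mount

    sgdisk "$disk" --zap-all                 # destroy GPT/PMBR state, as in partition_drive
    sgdisk "$disk" --new=1:2048:2099199      # partition 1, 1 GiB
    sgdisk "$disk" --new=2:2099200:4196351   # partition 2, 1 GiB

    # Assumed device-mapper table: one linear device spanning both partitions.
    # blockdev --getsz reports sizes in 512-byte sectors, the unit dm tables use.
    s1=$(blockdev --getsz "${disk}p1")
    s2=$(blockdev --getsz "${disk}p2")
    { echo "0 $s1 linear ${disk}p1 0"
      echo "$s1 $s2 linear ${disk}p2 0"; } | dmsetup create nvme_dm_test

    mkfs.ext4 -qF /dev/mapper/nvme_dm_test   # format, as mkfs() does above
    mkdir -p "$mnt"
    mount /dev/mapper/nvme_dm_test "$mnt"

    # Teardown mirrors cleanup_dm: unmount, drop the mapping, wipe signatures.
    umount "$mnt"
    dmsetup remove --force nvme_dm_test
    wipefs --all "${disk}p1" "${disk}p2"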
00:03:38.551 real 0m8.909s 00:03:38.551 user 0m1.812s 00:03:38.551 sys 0m3.665s 00:03:38.551 21:04:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:38.551 21:04:32 -- common/autotest_common.sh@10 -- # set +x 00:03:38.551 ************************************ 00:03:38.551 END TEST dm_mount 00:03:38.551 ************************************ 00:03:38.551 21:04:32 -- setup/devices.sh@1 -- # cleanup 00:03:38.551 21:04:32 -- setup/devices.sh@11 -- # cleanup_nvme 00:03:38.551 21:04:32 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:03:38.551 21:04:32 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:38.551 21:04:32 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:38.551 21:04:32 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:38.551 21:04:32 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:38.819 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:38.819 /dev/nvme0n1: 8 bytes were erased at offset 0xdf90355e00 (gpt): 45 46 49 20 50 41 52 54 00:03:38.819 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:38.819 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:38.819 21:04:33 -- setup/devices.sh@12 -- # cleanup_dm 00:03:38.819 21:04:33 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 00:03:38.819 21:04:33 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:38.819 21:04:33 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:38.819 21:04:33 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:38.819 21:04:33 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:38.819 21:04:33 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:38.819 00:03:38.819 real 0m24.383s 00:03:38.819 user 0m5.969s 00:03:38.819 sys 0m11.754s 00:03:38.819 21:04:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:38.819 21:04:33 -- common/autotest_common.sh@10 -- # set +x 00:03:38.819 ************************************ 00:03:38.819 END TEST devices 00:03:38.819 ************************************ 00:03:39.078 00:03:39.078 real 1m21.255s 00:03:39.078 user 0m22.389s 00:03:39.078 sys 0m42.320s 00:03:39.078 21:04:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:39.078 21:04:33 -- common/autotest_common.sh@10 -- # set +x 00:03:39.078 ************************************ 00:03:39.078 END TEST setup.sh 00:03:39.078 ************************************ 00:03:39.078 21:04:33 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh status 00:03:42.365 Hugepages 00:03:42.365 node hugesize free / total 00:03:42.365 node0 1048576kB 0 / 0 00:03:42.365 node0 2048kB 2048 / 2048 00:03:42.365 node1 1048576kB 0 / 0 00:03:42.365 node1 2048kB 0 / 0 00:03:42.365 00:03:42.365 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:42.365 NVMe 0000:03:00.0 1344 51c3 0 nvme nvme1 nvme1n1 00:03:42.365 DSA 0000:6a:01.0 8086 0b25 0 idxd - - 00:03:42.365 IAA 0000:6a:02.0 8086 0cfe 0 idxd - - 00:03:42.365 DSA 0000:6f:01.0 8086 0b25 0 idxd - - 00:03:42.365 IAA 0000:6f:02.0 8086 0cfe 0 idxd - - 00:03:42.365 DSA 0000:74:01.0 8086 0b25 0 idxd - - 00:03:42.365 IAA 0000:74:02.0 8086 0cfe 0 idxd - - 00:03:42.365 DSA 0000:79:01.0 8086 0b25 0 idxd - - 00:03:42.365 IAA 0000:79:02.0 8086 0cfe 0 idxd - - 00:03:42.365 NVMe 0000:c9:00.0 144d a80a 1 nvme nvme0 nvme0n1 00:03:42.365 DSA 0000:e7:01.0 8086 0b25 1 idxd - 
- 00:03:42.365 IAA 0000:e7:02.0 8086 0cfe 1 idxd - - 00:03:42.365 DSA 0000:ec:01.0 8086 0b25 1 idxd - - 00:03:42.365 IAA 0000:ec:02.0 8086 0cfe 1 idxd - - 00:03:42.365 DSA 0000:f1:01.0 8086 0b25 1 idxd - - 00:03:42.365 IAA 0000:f1:02.0 8086 0cfe 1 idxd - - 00:03:42.365 DSA 0000:f6:01.0 8086 0b25 1 idxd - - 00:03:42.365 IAA 0000:f6:02.0 8086 0cfe 1 idxd - - 00:03:42.365 21:04:36 -- spdk/autotest.sh@130 -- # uname -s 00:03:42.365 21:04:36 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:03:42.365 21:04:36 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:03:42.365 21:04:36 -- common/autotest_common.sh@1517 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:03:44.951 0000:74:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:44.951 0000:f1:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:44.951 0000:79:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:44.951 0000:6f:01.0 (8086 0b25): idxd -> vfio-pci 00:03:44.951 0000:6f:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:44.951 0000:f6:01.0 (8086 0b25): idxd -> vfio-pci 00:03:44.951 0000:f6:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:44.951 0000:74:01.0 (8086 0b25): idxd -> vfio-pci 00:03:44.951 0000:6a:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:44.951 0000:79:01.0 (8086 0b25): idxd -> vfio-pci 00:03:44.951 0000:ec:01.0 (8086 0b25): idxd -> vfio-pci 00:03:44.951 0000:6a:01.0 (8086 0b25): idxd -> vfio-pci 00:03:44.951 0000:ec:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:44.951 0000:e7:01.0 (8086 0b25): idxd -> vfio-pci 00:03:44.951 0000:e7:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:44.951 0000:f1:01.0 (8086 0b25): idxd -> vfio-pci 00:03:45.543 0000:c9:00.0 (144d a80a): nvme -> vfio-pci 00:03:45.803 0000:03:00.0 (1344 51c3): nvme -> vfio-pci 00:03:46.062 21:04:40 -- common/autotest_common.sh@1518 -- # sleep 1 00:03:47.001 21:04:41 -- common/autotest_common.sh@1519 -- # bdfs=() 00:03:47.001 21:04:41 -- common/autotest_common.sh@1519 -- # local bdfs 00:03:47.001 21:04:41 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:47.001 21:04:41 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:47.001 21:04:41 -- common/autotest_common.sh@1499 -- # bdfs=() 00:03:47.001 21:04:41 -- common/autotest_common.sh@1499 -- # local bdfs 00:03:47.001 21:04:41 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:47.001 21:04:41 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:47.001 21:04:41 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:03:47.001 21:04:41 -- common/autotest_common.sh@1501 -- # (( 2 == 0 )) 00:03:47.001 21:04:41 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:03:00.0 0000:c9:00.0 00:03:47.001 21:04:41 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:03:49.537 Waiting for block devices as requested 00:03:49.537 0000:c9:00.0 (144d a80a): vfio-pci -> nvme 00:03:49.537 0000:74:02.0 (8086 0cfe): vfio-pci -> idxd 00:03:49.798 0000:f1:02.0 (8086 0cfe): vfio-pci -> idxd 00:03:49.798 0000:79:02.0 (8086 0cfe): vfio-pci -> idxd 00:03:49.798 0000:6f:01.0 (8086 0b25): vfio-pci -> idxd 00:03:49.798 0000:6f:02.0 (8086 0cfe): vfio-pci -> idxd 00:03:50.057 0000:f6:01.0 (8086 0b25): vfio-pci -> idxd 00:03:50.057 0000:f6:02.0 (8086 0cfe): vfio-pci -> idxd 00:03:50.057 0000:74:01.0 (8086 0b25): vfio-pci -> idxd 00:03:50.057 0000:6a:02.0 (8086 0cfe): vfio-pci -> idxd 00:03:50.316 0000:79:01.0 (8086 0b25): vfio-pci -> idxd 00:03:50.316 
0000:ec:01.0 (8086 0b25): vfio-pci -> idxd 00:03:50.316 0000:6a:01.0 (8086 0b25): vfio-pci -> idxd 00:03:50.316 0000:ec:02.0 (8086 0cfe): vfio-pci -> idxd 00:03:50.575 0000:e7:01.0 (8086 0b25): vfio-pci -> idxd 00:03:50.575 0000:e7:02.0 (8086 0cfe): vfio-pci -> idxd 00:03:50.575 0000:f1:01.0 (8086 0b25): vfio-pci -> idxd 00:03:50.835 0000:03:00.0 (1344 51c3): vfio-pci -> nvme 00:03:51.094 21:04:45 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:51.094 21:04:45 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:03:00.0 00:03:51.094 21:04:45 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:03:51.094 21:04:45 -- common/autotest_common.sh@1488 -- # grep 0000:03:00.0/nvme/nvme 00:03:51.094 21:04:45 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/0000:03:00.0/nvme/nvme1 00:03:51.095 21:04:45 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/0000:03:00.0/nvme/nvme1 ]] 00:03:51.095 21:04:45 -- common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:00/0000:00:10.0/0000:03:00.0/nvme/nvme1 00:03:51.095 21:04:45 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme1 00:03:51.095 21:04:45 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:03:51.095 21:04:45 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:03:51.095 21:04:45 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:03:51.095 21:04:45 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:51.095 21:04:45 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:51.095 21:04:45 -- common/autotest_common.sh@1531 -- # oacs=' 0x5e' 00:03:51.095 21:04:45 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:51.095 21:04:45 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:51.095 21:04:45 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:51.095 21:04:45 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:03:51.095 21:04:45 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:51.095 21:04:45 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:51.095 21:04:45 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:51.095 21:04:45 -- common/autotest_common.sh@1543 -- # continue 00:03:51.095 21:04:45 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:51.095 21:04:45 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:c9:00.0 00:03:51.095 21:04:45 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:03:51.095 21:04:45 -- common/autotest_common.sh@1488 -- # grep 0000:c9:00.0/nvme/nvme 00:03:51.095 21:04:45 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:c7/0000:c7:03.0/0000:c9:00.0/nvme/nvme0 00:03:51.095 21:04:45 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:c7/0000:c7:03.0/0000:c9:00.0/nvme/nvme0 ]] 00:03:51.095 21:04:45 -- common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:c7/0000:c7:03.0/0000:c9:00.0/nvme/nvme0 00:03:51.095 21:04:45 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme0 00:03:51.095 21:04:45 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:51.095 21:04:45 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:51.095 21:04:45 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:51.095 21:04:45 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:51.095 21:04:45 -- 
common/autotest_common.sh@1531 -- # grep oacs 00:03:51.095 21:04:45 -- common/autotest_common.sh@1531 -- # oacs=' 0x5f' 00:03:51.095 21:04:45 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:51.095 21:04:45 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:51.095 21:04:45 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:51.095 21:04:45 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:51.095 21:04:45 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:51.095 21:04:45 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:51.095 21:04:45 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:51.095 21:04:45 -- common/autotest_common.sh@1543 -- # continue 00:03:51.095 21:04:45 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:03:51.095 21:04:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:51.095 21:04:45 -- common/autotest_common.sh@10 -- # set +x 00:03:51.095 21:04:45 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:03:51.095 21:04:45 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:51.095 21:04:45 -- common/autotest_common.sh@10 -- # set +x 00:03:51.095 21:04:45 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:03:54.383 0000:74:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:54.383 0000:f1:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:54.383 0000:79:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:54.383 0000:6f:01.0 (8086 0b25): idxd -> vfio-pci 00:03:54.383 0000:6f:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:54.383 0000:f6:01.0 (8086 0b25): idxd -> vfio-pci 00:03:54.383 0000:f6:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:54.383 0000:74:01.0 (8086 0b25): idxd -> vfio-pci 00:03:54.383 0000:6a:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:54.383 0000:79:01.0 (8086 0b25): idxd -> vfio-pci 00:03:54.383 0000:ec:01.0 (8086 0b25): idxd -> vfio-pci 00:03:54.383 0000:6a:01.0 (8086 0b25): idxd -> vfio-pci 00:03:54.383 0000:ec:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:54.383 0000:e7:01.0 (8086 0b25): idxd -> vfio-pci 00:03:54.383 0000:e7:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:54.383 0000:f1:01.0 (8086 0b25): idxd -> vfio-pci 00:03:54.953 0000:c9:00.0 (144d a80a): nvme -> vfio-pci 00:03:55.213 0000:03:00.0 (1344 51c3): nvme -> vfio-pci 00:03:55.213 21:04:49 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:03:55.213 21:04:49 -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:55.213 21:04:49 -- common/autotest_common.sh@10 -- # set +x 00:03:55.213 21:04:49 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:03:55.213 21:04:49 -- common/autotest_common.sh@1577 -- # mapfile -t bdfs 00:03:55.213 21:04:49 -- common/autotest_common.sh@1577 -- # get_nvme_bdfs_by_id 0x0a54 00:03:55.213 21:04:49 -- common/autotest_common.sh@1563 -- # bdfs=() 00:03:55.213 21:04:49 -- common/autotest_common.sh@1563 -- # local bdfs 00:03:55.213 21:04:49 -- common/autotest_common.sh@1565 -- # get_nvme_bdfs 00:03:55.213 21:04:49 -- common/autotest_common.sh@1499 -- # bdfs=() 00:03:55.213 21:04:49 -- common/autotest_common.sh@1499 -- # local bdfs 00:03:55.213 21:04:49 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:55.472 21:04:49 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:55.472 21:04:49 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:03:55.472 21:04:49 -- common/autotest_common.sh@1501 -- # (( 2 == 0 )) 00:03:55.472 21:04:49 -- 
common/autotest_common.sh@1505 -- # printf '%s\n' 0000:03:00.0 0000:c9:00.0 00:03:55.472 21:04:49 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:03:55.472 21:04:49 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:03:00.0/device 00:03:55.472 21:04:49 -- common/autotest_common.sh@1566 -- # device=0x51c3 00:03:55.472 21:04:49 -- common/autotest_common.sh@1567 -- # [[ 0x51c3 == \0\x\0\a\5\4 ]] 00:03:55.472 21:04:49 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:03:55.472 21:04:49 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:c9:00.0/device 00:03:55.472 21:04:49 -- common/autotest_common.sh@1566 -- # device=0xa80a 00:03:55.472 21:04:49 -- common/autotest_common.sh@1567 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:03:55.472 21:04:49 -- common/autotest_common.sh@1572 -- # printf '%s\n' 00:03:55.472 21:04:49 -- common/autotest_common.sh@1578 -- # [[ -z '' ]] 00:03:55.472 21:04:49 -- common/autotest_common.sh@1579 -- # return 0 00:03:55.472 21:04:49 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:03:55.472 21:04:49 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:03:55.472 21:04:49 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:55.472 21:04:49 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:55.472 21:04:49 -- spdk/autotest.sh@162 -- # timing_enter lib 00:03:55.472 21:04:49 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:55.472 21:04:49 -- common/autotest_common.sh@10 -- # set +x 00:03:55.472 21:04:49 -- spdk/autotest.sh@164 -- # run_test env /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/env.sh 00:03:55.472 21:04:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:55.472 21:04:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:55.472 21:04:49 -- common/autotest_common.sh@10 -- # set +x 00:03:55.472 ************************************ 00:03:55.472 START TEST env 00:03:55.472 ************************************ 00:03:55.472 21:04:49 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/env.sh 00:03:55.732 * Looking for test storage... 
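The device-ID gate traced in opal_revert_cleanup above reduces to a small sysfs loop. A minimal sketch, assuming a get_nvme_bdfs helper (as in autotest_common.sh) that prints one PCI address per line; on this rig the IDs read back as 0x51c3 and 0xa80a, neither equal to the 0x0a54 target, so no controller is selected and the revert is skipped.

    target=0x0a54                               # device ID the cleanup looks for
    for bdf in $(get_nvme_bdfs); do
        dev=$(cat "/sys/bus/pci/devices/$bdf/device")
        [[ $dev == "$target" ]] && echo "$bdf"  # emit only matching controllers
    done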
00:03:55.732 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env 00:03:55.732 21:04:49 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/memory/memory_ut 00:03:55.732 21:04:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:55.732 21:04:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:55.732 21:04:49 -- common/autotest_common.sh@10 -- # set +x 00:03:55.732 ************************************ 00:03:55.732 START TEST env_memory 00:03:55.732 ************************************ 00:03:55.732 21:04:49 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/memory/memory_ut 00:03:55.732 00:03:55.732 00:03:55.732 CUnit - A unit testing framework for C - Version 2.1-3 00:03:55.732 http://cunit.sourceforge.net/ 00:03:55.732 00:03:55.732 00:03:55.732 Suite: memory 00:03:55.732 Test: alloc and free memory map ...[2024-04-23 21:04:49.936817] /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:55.732 passed 00:03:55.732 Test: mem map translation ...[2024-04-23 21:04:49.983969] /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:55.732 [2024-04-23 21:04:49.984001] /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:55.732 [2024-04-23 21:04:49.984081] /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:55.732 [2024-04-23 21:04:49.984098] /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:55.993 passed 00:03:55.993 Test: mem map registration ...[2024-04-23 21:04:50.073648] /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:03:55.993 [2024-04-23 21:04:50.073717] /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:03:55.993 passed 00:03:55.993 Test: mem map adjacent registrations ...passed 00:03:55.993 00:03:55.993 Run Summary: Type Total Ran Passed Failed Inactive 00:03:55.993 suites 1 1 n/a 0 0 00:03:55.993 tests 4 4 4 0 0 00:03:55.993 asserts 152 152 152 0 n/a 00:03:55.993 00:03:55.993 Elapsed time = 0.297 seconds 00:03:55.993 00:03:55.993 real 0m0.319s 00:03:55.993 user 0m0.298s 00:03:55.993 sys 0m0.020s 00:03:55.993 21:04:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:55.993 21:04:50 -- common/autotest_common.sh@10 -- # set +x 00:03:55.993 ************************************ 00:03:55.993 END TEST env_memory 00:03:55.993 ************************************ 00:03:55.993 21:04:50 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:55.993 21:04:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:55.993 21:04:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:55.993 21:04:50 -- common/autotest_common.sh@10 -- # set +x 00:03:56.252 ************************************ 00:03:56.252 
START TEST env_vtophys 00:03:56.252 ************************************ 00:03:56.252 21:04:50 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:56.252 EAL: lib.eal log level changed from notice to debug 00:03:56.252 EAL: Detected lcore 0 as core 0 on socket 0 00:03:56.252 EAL: Detected lcore 1 as core 1 on socket 0 00:03:56.252 EAL: Detected lcore 2 as core 2 on socket 0 00:03:56.252 EAL: Detected lcore 3 as core 3 on socket 0 00:03:56.252 EAL: Detected lcore 4 as core 4 on socket 0 00:03:56.252 EAL: Detected lcore 5 as core 5 on socket 0 00:03:56.252 EAL: Detected lcore 6 as core 6 on socket 0 00:03:56.252 EAL: Detected lcore 7 as core 7 on socket 0 00:03:56.252 EAL: Detected lcore 8 as core 8 on socket 0 00:03:56.252 EAL: Detected lcore 9 as core 9 on socket 0 00:03:56.252 EAL: Detected lcore 10 as core 10 on socket 0 00:03:56.252 EAL: Detected lcore 11 as core 11 on socket 0 00:03:56.252 EAL: Detected lcore 12 as core 12 on socket 0 00:03:56.252 EAL: Detected lcore 13 as core 13 on socket 0 00:03:56.252 EAL: Detected lcore 14 as core 14 on socket 0 00:03:56.252 EAL: Detected lcore 15 as core 15 on socket 0 00:03:56.252 EAL: Detected lcore 16 as core 16 on socket 0 00:03:56.252 EAL: Detected lcore 17 as core 17 on socket 0 00:03:56.252 EAL: Detected lcore 18 as core 18 on socket 0 00:03:56.252 EAL: Detected lcore 19 as core 19 on socket 0 00:03:56.252 EAL: Detected lcore 20 as core 20 on socket 0 00:03:56.253 EAL: Detected lcore 21 as core 21 on socket 0 00:03:56.253 EAL: Detected lcore 22 as core 22 on socket 0 00:03:56.253 EAL: Detected lcore 23 as core 23 on socket 0 00:03:56.253 EAL: Detected lcore 24 as core 24 on socket 0 00:03:56.253 EAL: Detected lcore 25 as core 25 on socket 0 00:03:56.253 EAL: Detected lcore 26 as core 26 on socket 0 00:03:56.253 EAL: Detected lcore 27 as core 27 on socket 0 00:03:56.253 EAL: Detected lcore 28 as core 28 on socket 0 00:03:56.253 EAL: Detected lcore 29 as core 29 on socket 0 00:03:56.253 EAL: Detected lcore 30 as core 30 on socket 0 00:03:56.253 EAL: Detected lcore 31 as core 31 on socket 0 00:03:56.253 EAL: Detected lcore 32 as core 0 on socket 1 00:03:56.253 EAL: Detected lcore 33 as core 1 on socket 1 00:03:56.253 EAL: Detected lcore 34 as core 2 on socket 1 00:03:56.253 EAL: Detected lcore 35 as core 3 on socket 1 00:03:56.253 EAL: Detected lcore 36 as core 4 on socket 1 00:03:56.253 EAL: Detected lcore 37 as core 5 on socket 1 00:03:56.253 EAL: Detected lcore 38 as core 6 on socket 1 00:03:56.253 EAL: Detected lcore 39 as core 7 on socket 1 00:03:56.253 EAL: Detected lcore 40 as core 8 on socket 1 00:03:56.253 EAL: Detected lcore 41 as core 9 on socket 1 00:03:56.253 EAL: Detected lcore 42 as core 10 on socket 1 00:03:56.253 EAL: Detected lcore 43 as core 11 on socket 1 00:03:56.253 EAL: Detected lcore 44 as core 12 on socket 1 00:03:56.253 EAL: Detected lcore 45 as core 13 on socket 1 00:03:56.253 EAL: Detected lcore 46 as core 14 on socket 1 00:03:56.253 EAL: Detected lcore 47 as core 15 on socket 1 00:03:56.253 EAL: Detected lcore 48 as core 16 on socket 1 00:03:56.253 EAL: Detected lcore 49 as core 17 on socket 1 00:03:56.253 EAL: Detected lcore 50 as core 18 on socket 1 00:03:56.253 EAL: Detected lcore 51 as core 19 on socket 1 00:03:56.253 EAL: Detected lcore 52 as core 20 on socket 1 00:03:56.253 EAL: Detected lcore 53 as core 21 on socket 1 00:03:56.253 EAL: Detected lcore 54 as core 22 on socket 1 00:03:56.253 EAL: Detected lcore 55 as core 23 on socket 1 
00:03:56.253 EAL: Detected lcore 56 as core 24 on socket 1 00:03:56.253 EAL: Detected lcore 57 as core 25 on socket 1 00:03:56.253 EAL: Detected lcore 58 as core 26 on socket 1 00:03:56.253 EAL: Detected lcore 59 as core 27 on socket 1 00:03:56.253 EAL: Detected lcore 60 as core 28 on socket 1 00:03:56.253 EAL: Detected lcore 61 as core 29 on socket 1 00:03:56.253 EAL: Detected lcore 62 as core 30 on socket 1 00:03:56.253 EAL: Detected lcore 63 as core 31 on socket 1 00:03:56.253 EAL: Detected lcore 64 as core 0 on socket 0 00:03:56.253 EAL: Detected lcore 65 as core 1 on socket 0 00:03:56.253 EAL: Detected lcore 66 as core 2 on socket 0 00:03:56.253 EAL: Detected lcore 67 as core 3 on socket 0 00:03:56.253 EAL: Detected lcore 68 as core 4 on socket 0 00:03:56.253 EAL: Detected lcore 69 as core 5 on socket 0 00:03:56.253 EAL: Detected lcore 70 as core 6 on socket 0 00:03:56.253 EAL: Detected lcore 71 as core 7 on socket 0 00:03:56.253 EAL: Detected lcore 72 as core 8 on socket 0 00:03:56.253 EAL: Detected lcore 73 as core 9 on socket 0 00:03:56.253 EAL: Detected lcore 74 as core 10 on socket 0 00:03:56.253 EAL: Detected lcore 75 as core 11 on socket 0 00:03:56.253 EAL: Detected lcore 76 as core 12 on socket 0 00:03:56.253 EAL: Detected lcore 77 as core 13 on socket 0 00:03:56.253 EAL: Detected lcore 78 as core 14 on socket 0 00:03:56.253 EAL: Detected lcore 79 as core 15 on socket 0 00:03:56.253 EAL: Detected lcore 80 as core 16 on socket 0 00:03:56.253 EAL: Detected lcore 81 as core 17 on socket 0 00:03:56.253 EAL: Detected lcore 82 as core 18 on socket 0 00:03:56.253 EAL: Detected lcore 83 as core 19 on socket 0 00:03:56.253 EAL: Detected lcore 84 as core 20 on socket 0 00:03:56.253 EAL: Detected lcore 85 as core 21 on socket 0 00:03:56.253 EAL: Detected lcore 86 as core 22 on socket 0 00:03:56.253 EAL: Detected lcore 87 as core 23 on socket 0 00:03:56.253 EAL: Detected lcore 88 as core 24 on socket 0 00:03:56.253 EAL: Detected lcore 89 as core 25 on socket 0 00:03:56.253 EAL: Detected lcore 90 as core 26 on socket 0 00:03:56.253 EAL: Detected lcore 91 as core 27 on socket 0 00:03:56.253 EAL: Detected lcore 92 as core 28 on socket 0 00:03:56.253 EAL: Detected lcore 93 as core 29 on socket 0 00:03:56.253 EAL: Detected lcore 94 as core 30 on socket 0 00:03:56.253 EAL: Detected lcore 95 as core 31 on socket 0 00:03:56.253 EAL: Detected lcore 96 as core 0 on socket 1 00:03:56.253 EAL: Detected lcore 97 as core 1 on socket 1 00:03:56.253 EAL: Detected lcore 98 as core 2 on socket 1 00:03:56.253 EAL: Detected lcore 99 as core 3 on socket 1 00:03:56.253 EAL: Detected lcore 100 as core 4 on socket 1 00:03:56.253 EAL: Detected lcore 101 as core 5 on socket 1 00:03:56.253 EAL: Detected lcore 102 as core 6 on socket 1 00:03:56.253 EAL: Detected lcore 103 as core 7 on socket 1 00:03:56.253 EAL: Detected lcore 104 as core 8 on socket 1 00:03:56.253 EAL: Detected lcore 105 as core 9 on socket 1 00:03:56.253 EAL: Detected lcore 106 as core 10 on socket 1 00:03:56.253 EAL: Detected lcore 107 as core 11 on socket 1 00:03:56.253 EAL: Detected lcore 108 as core 12 on socket 1 00:03:56.253 EAL: Detected lcore 109 as core 13 on socket 1 00:03:56.253 EAL: Detected lcore 110 as core 14 on socket 1 00:03:56.253 EAL: Detected lcore 111 as core 15 on socket 1 00:03:56.253 EAL: Detected lcore 112 as core 16 on socket 1 00:03:56.253 EAL: Detected lcore 113 as core 17 on socket 1 00:03:56.253 EAL: Detected lcore 114 as core 18 on socket 1 00:03:56.253 EAL: Detected lcore 115 as core 19 on socket 1 00:03:56.253 EAL: 
Detected lcore 116 as core 20 on socket 1 00:03:56.253 EAL: Detected lcore 117 as core 21 on socket 1 00:03:56.253 EAL: Detected lcore 118 as core 22 on socket 1 00:03:56.253 EAL: Detected lcore 119 as core 23 on socket 1 00:03:56.253 EAL: Detected lcore 120 as core 24 on socket 1 00:03:56.253 EAL: Detected lcore 121 as core 25 on socket 1 00:03:56.253 EAL: Detected lcore 122 as core 26 on socket 1 00:03:56.253 EAL: Detected lcore 123 as core 27 on socket 1 00:03:56.253 EAL: Detected lcore 124 as core 28 on socket 1 00:03:56.253 EAL: Detected lcore 125 as core 29 on socket 1 00:03:56.253 EAL: Detected lcore 126 as core 30 on socket 1 00:03:56.253 EAL: Detected lcore 127 as core 31 on socket 1 00:03:56.253 EAL: Maximum logical cores by configuration: 128 00:03:56.253 EAL: Detected CPU lcores: 128 00:03:56.253 EAL: Detected NUMA nodes: 2 00:03:56.253 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:03:56.253 EAL: Detected shared linkage of DPDK 00:03:56.253 EAL: No shared files mode enabled, IPC will be disabled 00:03:56.253 EAL: Bus pci wants IOVA as 'DC' 00:03:56.253 EAL: Buses did not request a specific IOVA mode. 00:03:56.253 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:56.253 EAL: Selected IOVA mode 'VA' 00:03:56.253 EAL: No free 2048 kB hugepages reported on node 1 00:03:56.253 EAL: Probing VFIO support... 00:03:56.253 EAL: IOMMU type 1 (Type 1) is supported 00:03:56.253 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:56.253 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:56.253 EAL: VFIO support initialized 00:03:56.253 EAL: Ask a virtual area of 0x2e000 bytes 00:03:56.253 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:56.253 EAL: Setting up physically contiguous memory... 00:03:56.253 EAL: Setting maximum number of open files to 524288 00:03:56.253 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:56.253 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:56.253 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:56.253 EAL: Ask a virtual area of 0x61000 bytes 00:03:56.253 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:56.253 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:56.253 EAL: Ask a virtual area of 0x400000000 bytes 00:03:56.253 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:56.253 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:56.253 EAL: Ask a virtual area of 0x61000 bytes 00:03:56.253 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:56.253 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:56.253 EAL: Ask a virtual area of 0x400000000 bytes 00:03:56.253 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:56.253 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:56.253 EAL: Ask a virtual area of 0x61000 bytes 00:03:56.254 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:56.254 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:56.254 EAL: Ask a virtual area of 0x400000000 bytes 00:03:56.254 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:56.254 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:56.254 EAL: Ask a virtual area of 0x61000 bytes 00:03:56.254 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:56.254 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:56.254 EAL: Ask a virtual area of 0x400000000 bytes 00:03:56.254 EAL: 
Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:56.254 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:56.254 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:56.254 EAL: Ask a virtual area of 0x61000 bytes 00:03:56.254 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:56.254 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:56.254 EAL: Ask a virtual area of 0x400000000 bytes 00:03:56.254 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:56.254 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:56.254 EAL: Ask a virtual area of 0x61000 bytes 00:03:56.254 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:56.254 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:56.254 EAL: Ask a virtual area of 0x400000000 bytes 00:03:56.254 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:56.254 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:56.254 EAL: Ask a virtual area of 0x61000 bytes 00:03:56.254 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:56.254 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:56.254 EAL: Ask a virtual area of 0x400000000 bytes 00:03:56.254 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:56.254 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:56.254 EAL: Ask a virtual area of 0x61000 bytes 00:03:56.254 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:56.254 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:56.254 EAL: Ask a virtual area of 0x400000000 bytes 00:03:56.254 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:03:56.254 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:56.254 EAL: Hugepages will be freed exactly as allocated. 00:03:56.254 EAL: No shared files mode enabled, IPC is disabled 00:03:56.254 EAL: No shared files mode enabled, IPC is disabled 00:03:56.254 EAL: TSC frequency is ~1900000 KHz 00:03:56.254 EAL: Main lcore 0 is ready (tid=7fcbc36a3a40;cpuset=[0]) 00:03:56.254 EAL: Trying to obtain current memory policy. 00:03:56.254 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:56.254 EAL: Restoring previous memory policy: 0 00:03:56.254 EAL: request: mp_malloc_sync 00:03:56.254 EAL: No shared files mode enabled, IPC is disabled 00:03:56.254 EAL: Heap on socket 0 was expanded by 2MB 00:03:56.254 EAL: No shared files mode enabled, IPC is disabled 00:03:56.254 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:56.254 EAL: Mem event callback 'spdk:(nil)' registered 00:03:56.254 00:03:56.254 00:03:56.254 CUnit - A unit testing framework for C - Version 2.1-3 00:03:56.254 http://cunit.sourceforge.net/ 00:03:56.254 00:03:56.254 00:03:56.254 Suite: components_suite 00:03:56.513 Test: vtophys_malloc_test ...passed 00:03:56.513 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:03:56.513 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:56.513 EAL: Restoring previous memory policy: 4 00:03:56.513 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.513 EAL: request: mp_malloc_sync 00:03:56.513 EAL: No shared files mode enabled, IPC is disabled 00:03:56.513 EAL: Heap on socket 0 was expanded by 4MB 00:03:56.513 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.513 EAL: request: mp_malloc_sync 00:03:56.513 EAL: No shared files mode enabled, IPC is disabled 00:03:56.513 EAL: Heap on socket 0 was shrunk by 4MB 00:03:56.513 EAL: Trying to obtain current memory policy. 00:03:56.513 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:56.513 EAL: Restoring previous memory policy: 4 00:03:56.513 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.513 EAL: request: mp_malloc_sync 00:03:56.513 EAL: No shared files mode enabled, IPC is disabled 00:03:56.513 EAL: Heap on socket 0 was expanded by 6MB 00:03:56.513 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.513 EAL: request: mp_malloc_sync 00:03:56.513 EAL: No shared files mode enabled, IPC is disabled 00:03:56.513 EAL: Heap on socket 0 was shrunk by 6MB 00:03:56.513 EAL: Trying to obtain current memory policy. 00:03:56.513 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:56.513 EAL: Restoring previous memory policy: 4 00:03:56.513 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.513 EAL: request: mp_malloc_sync 00:03:56.513 EAL: No shared files mode enabled, IPC is disabled 00:03:56.513 EAL: Heap on socket 0 was expanded by 10MB 00:03:56.513 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.513 EAL: request: mp_malloc_sync 00:03:56.513 EAL: No shared files mode enabled, IPC is disabled 00:03:56.513 EAL: Heap on socket 0 was shrunk by 10MB 00:03:56.513 EAL: Trying to obtain current memory policy. 00:03:56.513 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:56.513 EAL: Restoring previous memory policy: 4 00:03:56.513 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.513 EAL: request: mp_malloc_sync 00:03:56.513 EAL: No shared files mode enabled, IPC is disabled 00:03:56.513 EAL: Heap on socket 0 was expanded by 18MB 00:03:56.513 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.513 EAL: request: mp_malloc_sync 00:03:56.513 EAL: No shared files mode enabled, IPC is disabled 00:03:56.513 EAL: Heap on socket 0 was shrunk by 18MB 00:03:56.513 EAL: Trying to obtain current memory policy. 00:03:56.513 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:56.513 EAL: Restoring previous memory policy: 4 00:03:56.513 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.513 EAL: request: mp_malloc_sync 00:03:56.513 EAL: No shared files mode enabled, IPC is disabled 00:03:56.513 EAL: Heap on socket 0 was expanded by 34MB 00:03:56.513 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.513 EAL: request: mp_malloc_sync 00:03:56.513 EAL: No shared files mode enabled, IPC is disabled 00:03:56.513 EAL: Heap on socket 0 was shrunk by 34MB 00:03:56.772 EAL: Trying to obtain current memory policy. 
00:03:56.772 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:56.772 EAL: Restoring previous memory policy: 4 00:03:56.772 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.772 EAL: request: mp_malloc_sync 00:03:56.772 EAL: No shared files mode enabled, IPC is disabled 00:03:56.772 EAL: Heap on socket 0 was expanded by 66MB 00:03:56.772 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.772 EAL: request: mp_malloc_sync 00:03:56.772 EAL: No shared files mode enabled, IPC is disabled 00:03:56.772 EAL: Heap on socket 0 was shrunk by 66MB 00:03:56.772 EAL: Trying to obtain current memory policy. 00:03:56.772 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:56.772 EAL: Restoring previous memory policy: 4 00:03:56.772 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.772 EAL: request: mp_malloc_sync 00:03:56.772 EAL: No shared files mode enabled, IPC is disabled 00:03:56.772 EAL: Heap on socket 0 was expanded by 130MB 00:03:56.772 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.772 EAL: request: mp_malloc_sync 00:03:56.772 EAL: No shared files mode enabled, IPC is disabled 00:03:56.772 EAL: Heap on socket 0 was shrunk by 130MB 00:03:57.031 EAL: Trying to obtain current memory policy. 00:03:57.031 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:57.031 EAL: Restoring previous memory policy: 4 00:03:57.031 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.031 EAL: request: mp_malloc_sync 00:03:57.031 EAL: No shared files mode enabled, IPC is disabled 00:03:57.031 EAL: Heap on socket 0 was expanded by 258MB 00:03:57.031 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.031 EAL: request: mp_malloc_sync 00:03:57.031 EAL: No shared files mode enabled, IPC is disabled 00:03:57.031 EAL: Heap on socket 0 was shrunk by 258MB 00:03:57.290 EAL: Trying to obtain current memory policy. 00:03:57.290 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:57.290 EAL: Restoring previous memory policy: 4 00:03:57.290 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.290 EAL: request: mp_malloc_sync 00:03:57.290 EAL: No shared files mode enabled, IPC is disabled 00:03:57.290 EAL: Heap on socket 0 was expanded by 514MB 00:03:57.550 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.550 EAL: request: mp_malloc_sync 00:03:57.550 EAL: No shared files mode enabled, IPC is disabled 00:03:57.550 EAL: Heap on socket 0 was shrunk by 514MB 00:03:57.809 EAL: Trying to obtain current memory policy. 
00:03:57.809 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:58.068 EAL: Restoring previous memory policy: 4 00:03:58.068 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.068 EAL: request: mp_malloc_sync 00:03:58.068 EAL: No shared files mode enabled, IPC is disabled 00:03:58.068 EAL: Heap on socket 0 was expanded by 1026MB 00:03:58.638 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.897 EAL: request: mp_malloc_sync 00:03:58.897 EAL: No shared files mode enabled, IPC is disabled 00:03:58.897 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:59.156 passed 00:03:59.156 00:03:59.156 Run Summary: Type Total Ran Passed Failed Inactive 00:03:59.156 suites 1 1 n/a 0 0 00:03:59.156 tests 2 2 2 0 0 00:03:59.156 asserts 497 497 497 0 n/a 00:03:59.156 00:03:59.156 Elapsed time = 2.880 seconds 00:03:59.156 EAL: Calling mem event callback 'spdk:(nil)' 00:03:59.156 EAL: request: mp_malloc_sync 00:03:59.156 EAL: No shared files mode enabled, IPC is disabled 00:03:59.156 EAL: Heap on socket 0 was shrunk by 2MB 00:03:59.156 EAL: No shared files mode enabled, IPC is disabled 00:03:59.156 EAL: No shared files mode enabled, IPC is disabled 00:03:59.156 EAL: No shared files mode enabled, IPC is disabled 00:03:59.416 00:03:59.416 real 0m3.105s 00:03:59.416 user 0m2.420s 00:03:59.416 sys 0m0.638s 00:03:59.416 21:04:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:59.416 21:04:53 -- common/autotest_common.sh@10 -- # set +x 00:03:59.416 ************************************ 00:03:59.416 END TEST env_vtophys 00:03:59.416 ************************************ 00:03:59.416 21:04:53 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/pci/pci_ut 00:03:59.416 21:04:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:59.416 21:04:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:59.416 21:04:53 -- common/autotest_common.sh@10 -- # set +x 00:03:59.416 ************************************ 00:03:59.416 START TEST env_pci 00:03:59.416 ************************************ 00:03:59.416 21:04:53 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/pci/pci_ut 00:03:59.416 00:03:59.416 00:03:59.416 CUnit - A unit testing framework for C - Version 2.1-3 00:03:59.416 http://cunit.sourceforge.net/ 00:03:59.416 00:03:59.416 00:03:59.416 Suite: pci 00:03:59.416 Test: pci_hook ...[2024-04-23 21:04:53.576044] /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1213259 has claimed it 00:03:59.416 EAL: Cannot find device (10000:00:01.0) 00:03:59.416 EAL: Failed to attach device on primary process 00:03:59.416 passed 00:03:59.416 00:03:59.416 Run Summary: Type Total Ran Passed Failed Inactive 00:03:59.416 suites 1 1 n/a 0 0 00:03:59.416 tests 1 1 1 0 0 00:03:59.416 asserts 25 25 25 0 n/a 00:03:59.416 00:03:59.416 Elapsed time = 0.052 seconds 00:03:59.416 00:03:59.416 real 0m0.103s 00:03:59.416 user 0m0.032s 00:03:59.416 sys 0m0.071s 00:03:59.416 21:04:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:59.416 21:04:53 -- common/autotest_common.sh@10 -- # set +x 00:03:59.416 ************************************ 00:03:59.416 END TEST env_pci 00:03:59.416 ************************************ 00:03:59.416 21:04:53 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:59.416 21:04:53 -- env/env.sh@15 -- # uname 00:03:59.676 21:04:53 -- env/env.sh@15 -- # '[' Linux = Linux ']' 
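The expand/shrink pairs vtophys_spdk_malloc_test just logged are not arbitrary: after the initial 2 MB expansion at startup, each round allocates 2^n + 2 MB (4, 6, 10, 18, 34, 66, 130, 258, 514, 1026) and frees the buffer again, so the heap shrinks back every time. A throwaway loop that reproduces the series read off the log (it only prints numbers, it does not touch the SPDK heap):

  # Print the allocation sizes seen in the mp_malloc_sync exchanges above.
  for n in $(seq 1 10); do
    echo "$(( (1 << n) + 2 )) MB"
  done
  # -> 4 6 10 18 34 66 130 258 514 1026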
00:03:59.676 21:04:53 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:59.676 21:04:53 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:59.676 21:04:53 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:03:59.676 21:04:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:59.676 21:04:53 -- common/autotest_common.sh@10 -- # set +x 00:03:59.676 ************************************ 00:03:59.676 START TEST env_dpdk_post_init 00:03:59.676 ************************************ 00:03:59.676 21:04:53 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:59.676 EAL: Detected CPU lcores: 128 00:03:59.676 EAL: Detected NUMA nodes: 2 00:03:59.676 EAL: Detected shared linkage of DPDK 00:03:59.676 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:59.676 EAL: Selected IOVA mode 'VA' 00:03:59.676 EAL: No free 2048 kB hugepages reported on node 1 00:03:59.676 EAL: VFIO support initialized 00:03:59.676 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:59.936 EAL: Using IOMMU type 1 (Type 1) 00:03:59.936 EAL: Probe PCI driver: spdk_nvme (1344:51c3) device: 0000:03:00.0 (socket 0) 00:04:00.195 EAL: Ignore mapping IO port bar(1) 00:04:00.195 EAL: Ignore mapping IO port bar(3) 00:04:00.195 EAL: Probe PCI driver: spdk_idxd (8086:0b25) device: 0000:6a:01.0 (socket 0) 00:04:00.456 EAL: Ignore mapping IO port bar(1) 00:04:00.456 EAL: Ignore mapping IO port bar(3) 00:04:00.456 EAL: Probe PCI driver: spdk_idxd (8086:0cfe) device: 0000:6a:02.0 (socket 0) 00:04:00.718 EAL: Ignore mapping IO port bar(1) 00:04:00.718 EAL: Ignore mapping IO port bar(3) 00:04:00.718 EAL: Probe PCI driver: spdk_idxd (8086:0b25) device: 0000:6f:01.0 (socket 0) 00:04:00.980 EAL: Ignore mapping IO port bar(1) 00:04:00.980 EAL: Ignore mapping IO port bar(3) 00:04:00.980 EAL: Probe PCI driver: spdk_idxd (8086:0cfe) device: 0000:6f:02.0 (socket 0) 00:04:00.980 EAL: Ignore mapping IO port bar(1) 00:04:00.980 EAL: Ignore mapping IO port bar(3) 00:04:01.240 EAL: Probe PCI driver: spdk_idxd (8086:0b25) device: 0000:74:01.0 (socket 0) 00:04:01.240 EAL: Ignore mapping IO port bar(1) 00:04:01.240 EAL: Ignore mapping IO port bar(3) 00:04:01.500 EAL: Probe PCI driver: spdk_idxd (8086:0cfe) device: 0000:74:02.0 (socket 0) 00:04:01.500 EAL: Ignore mapping IO port bar(1) 00:04:01.500 EAL: Ignore mapping IO port bar(3) 00:04:01.761 EAL: Probe PCI driver: spdk_idxd (8086:0b25) device: 0000:79:01.0 (socket 0) 00:04:01.761 EAL: Ignore mapping IO port bar(1) 00:04:01.761 EAL: Ignore mapping IO port bar(3) 00:04:01.761 EAL: Probe PCI driver: spdk_idxd (8086:0cfe) device: 0000:79:02.0 (socket 0) 00:04:02.022 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:c9:00.0 (socket 1) 00:04:02.283 EAL: Ignore mapping IO port bar(1) 00:04:02.283 EAL: Ignore mapping IO port bar(3) 00:04:02.283 EAL: Probe PCI driver: spdk_idxd (8086:0b25) device: 0000:e7:01.0 (socket 1) 00:04:02.544 EAL: Ignore mapping IO port bar(1) 00:04:02.544 EAL: Ignore mapping IO port bar(3) 00:04:02.544 EAL: Probe PCI driver: spdk_idxd (8086:0cfe) device: 0000:e7:02.0 (socket 1) 00:04:02.544 EAL: Ignore mapping IO port bar(1) 00:04:02.544 EAL: Ignore mapping IO port bar(3) 00:04:02.805 EAL: Probe PCI driver: spdk_idxd (8086:0b25) device: 0000:ec:01.0 (socket 1) 00:04:02.805 EAL: Ignore 
mapping IO port bar(1) 00:04:02.805 EAL: Ignore mapping IO port bar(3) 00:04:03.065 EAL: Probe PCI driver: spdk_idxd (8086:0cfe) device: 0000:ec:02.0 (socket 1) 00:04:03.065 EAL: Ignore mapping IO port bar(1) 00:04:03.065 EAL: Ignore mapping IO port bar(3) 00:04:03.326 EAL: Probe PCI driver: spdk_idxd (8086:0b25) device: 0000:f1:01.0 (socket 1) 00:04:03.326 EAL: Ignore mapping IO port bar(1) 00:04:03.326 EAL: Ignore mapping IO port bar(3) 00:04:03.326 EAL: Probe PCI driver: spdk_idxd (8086:0cfe) device: 0000:f1:02.0 (socket 1) 00:04:03.587 EAL: Ignore mapping IO port bar(1) 00:04:03.587 EAL: Ignore mapping IO port bar(3) 00:04:03.587 EAL: Probe PCI driver: spdk_idxd (8086:0b25) device: 0000:f6:01.0 (socket 1) 00:04:03.847 EAL: Ignore mapping IO port bar(1) 00:04:03.847 EAL: Ignore mapping IO port bar(3) 00:04:03.847 EAL: Probe PCI driver: spdk_idxd (8086:0cfe) device: 0000:f6:02.0 (socket 1) 00:04:04.782 EAL: Releasing PCI mapped resource for 0000:03:00.0 00:04:04.782 EAL: Calling pci_unmap_resource for 0000:03:00.0 at 0x202001000000 00:04:04.782 EAL: Releasing PCI mapped resource for 0000:c9:00.0 00:04:04.782 EAL: Calling pci_unmap_resource for 0000:c9:00.0 at 0x2020011c0000 00:04:04.782 Starting DPDK initialization... 00:04:04.782 Starting SPDK post initialization... 00:04:04.782 SPDK NVMe probe 00:04:04.782 Attaching to 0000:03:00.0 00:04:04.782 Attaching to 0000:c9:00.0 00:04:04.782 Attached to 0000:c9:00.0 00:04:04.782 Attached to 0000:03:00.0 00:04:04.782 Cleaning up... 00:04:06.733 00:04:06.733 real 0m6.928s 00:04:06.733 user 0m1.081s 00:04:06.733 sys 0m0.157s 00:04:06.733 21:05:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:06.733 21:05:00 -- common/autotest_common.sh@10 -- # set +x 00:04:06.733 ************************************ 00:04:06.733 END TEST env_dpdk_post_init 00:04:06.733 ************************************ 00:04:06.733 21:05:00 -- env/env.sh@26 -- # uname 00:04:06.733 21:05:00 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:06.733 21:05:00 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:06.733 21:05:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:06.733 21:05:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:06.733 21:05:00 -- common/autotest_common.sh@10 -- # set +x 00:04:06.733 ************************************ 00:04:06.733 START TEST env_mem_callbacks 00:04:06.733 ************************************ 00:04:06.733 21:05:00 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:06.733 EAL: Detected CPU lcores: 128 00:04:06.733 EAL: Detected NUMA nodes: 2 00:04:06.733 EAL: Detected shared linkage of DPDK 00:04:06.734 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:06.734 EAL: Selected IOVA mode 'VA' 00:04:06.734 EAL: No free 2048 kB hugepages reported on node 1 00:04:06.734 EAL: VFIO support initialized 00:04:06.734 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:06.734 00:04:06.734 00:04:06.734 CUnit - A unit testing framework for C - Version 2.1-3 00:04:06.734 http://cunit.sourceforge.net/ 00:04:06.734 00:04:06.734 00:04:06.734 Suite: memory 00:04:06.734 Test: test ... 
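The probe lines above walk every device spdk_idxd and spdk_nvme claim on this host: the 8086:0b25 / 8086:0cfe pairs on each socket are the idxd accelerator endpoints, and the two NVMe controllers (1344:51c3 at 0000:03:00.0 and 144d:a80a at 0000:c9:00.0, vendor IDs that suggest Micron and Samsung parts) are the ones attached during "SPDK NVMe probe". A hedged sketch for checking the same inventory before a run, assuming an SPDK checkout with the stock setup.sh helper:

  SPDK_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk   # reusing this log's workspace layout
  sudo "$SPDK_DIR/scripts/setup.sh" status                # driver binding per device
  lspci -nn -d 8086:0b25                                  # idxd endpoints probed by spdk_idxd above
  lspci -nn -d 1344:51c3                                  # the NVMe controller at 0000:03:00.0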
00:04:06.734 register 0x200000200000 2097152 00:04:06.734 malloc 3145728 00:04:06.734 register 0x200000400000 4194304 00:04:06.734 buf 0x2000004fffc0 len 3145728 PASSED 00:04:06.734 malloc 64 00:04:06.734 buf 0x2000004ffec0 len 64 PASSED 00:04:06.734 malloc 4194304 00:04:06.734 register 0x200000800000 6291456 00:04:06.734 buf 0x2000009fffc0 len 4194304 PASSED 00:04:06.734 free 0x2000004fffc0 3145728 00:04:06.734 free 0x2000004ffec0 64 00:04:06.734 unregister 0x200000400000 4194304 PASSED 00:04:06.734 free 0x2000009fffc0 4194304 00:04:06.734 unregister 0x200000800000 6291456 PASSED 00:04:06.734 malloc 8388608 00:04:06.734 register 0x200000400000 10485760 00:04:06.734 buf 0x2000005fffc0 len 8388608 PASSED 00:04:06.734 free 0x2000005fffc0 8388608 00:04:06.734 unregister 0x200000400000 10485760 PASSED 00:04:06.734 passed 00:04:06.734 00:04:06.734 Run Summary: Type Total Ran Passed Failed Inactive 00:04:06.734 suites 1 1 n/a 0 0 00:04:06.734 tests 1 1 1 0 0 00:04:06.734 asserts 15 15 15 0 n/a 00:04:06.734 00:04:06.734 Elapsed time = 0.023 seconds 00:04:06.734 00:04:06.734 real 0m0.142s 00:04:06.734 user 0m0.056s 00:04:06.734 sys 0m0.085s 00:04:06.734 21:05:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:06.734 21:05:00 -- common/autotest_common.sh@10 -- # set +x 00:04:06.734 ************************************ 00:04:06.734 END TEST env_mem_callbacks 00:04:06.734 ************************************ 00:04:06.994 00:04:06.994 real 0m11.323s 00:04:06.994 user 0m4.134s 00:04:06.994 sys 0m1.416s 00:04:06.994 21:05:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:06.994 21:05:01 -- common/autotest_common.sh@10 -- # set +x 00:04:06.994 ************************************ 00:04:06.994 END TEST env 00:04:06.994 ************************************ 00:04:06.994 21:05:01 -- spdk/autotest.sh@165 -- # run_test rpc /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc/rpc.sh 00:04:06.994 21:05:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:06.994 21:05:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:06.994 21:05:01 -- common/autotest_common.sh@10 -- # set +x 00:04:06.994 ************************************ 00:04:06.994 START TEST rpc 00:04:06.994 ************************************ 00:04:06.994 21:05:01 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc/rpc.sh 00:04:06.994 * Looking for test storage... 00:04:06.994 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc 00:04:06.994 21:05:01 -- rpc/rpc.sh@65 -- # spdk_pid=1214870 00:04:06.994 21:05:01 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:06.994 21:05:01 -- rpc/rpc.sh@67 -- # waitforlisten 1214870 00:04:06.994 21:05:01 -- common/autotest_common.sh@817 -- # '[' -z 1214870 ']' 00:04:06.994 21:05:01 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:06.994 21:05:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:06.994 21:05:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:06.994 21:05:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:06.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
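The register/unregister lines above come from the mem_callbacks unit test: each allocation that needs fresh hugepages fires a registration callback for the new region (note the 4 MB malloc registering a 6 MB region at 0x200000800000, the allocator apparently rounding the request up to whole heap elements), and each free fires the matching unregister. Separately, the START/END banners wrapping every test in this log come from run_test in test/common/autotest_common.sh; a simplified, hedged re-implementation of its shape (the real helper also manages xtrace state and validates its argument count, which is what the "'[' 2 -le 1 ']'" traces are):

  # Hedged sketch only; not the verbatim helper.
  run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    "$@"; local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
  }
  # Usage matching the invocations in this log:
  #   run_test env_mem_callbacks "$SPDK_DIR/test/env/mem_callbacks/mem_callbacks"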
00:04:06.994 21:05:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:06.994 21:05:01 -- common/autotest_common.sh@10 -- # set +x 00:04:07.253 [2024-04-23 21:05:01.294567] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 00:04:07.253 [2024-04-23 21:05:01.294680] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1214870 ] 00:04:07.253 EAL: No free 2048 kB hugepages reported on node 1 00:04:07.253 [2024-04-23 21:05:01.415128] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:07.253 [2024-04-23 21:05:01.507553] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:07.253 [2024-04-23 21:05:01.507588] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1214870' to capture a snapshot of events at runtime. 00:04:07.253 [2024-04-23 21:05:01.507601] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:07.253 [2024-04-23 21:05:01.507610] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:07.253 [2024-04-23 21:05:01.507619] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1214870 for offline analysis/debug. 00:04:07.253 [2024-04-23 21:05:01.507649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:07.822 21:05:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:07.822 21:05:02 -- common/autotest_common.sh@850 -- # return 0 00:04:07.822 21:05:02 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc 00:04:07.822 21:05:02 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc 00:04:07.822 21:05:02 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:07.822 21:05:02 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:07.822 21:05:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:07.822 21:05:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:07.822 21:05:02 -- common/autotest_common.sh@10 -- # set +x 00:04:08.081 ************************************ 00:04:08.081 START TEST rpc_integrity 00:04:08.081 ************************************ 00:04:08.081 21:05:02 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:04:08.081 21:05:02 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:08.081 21:05:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:08.081 21:05:02 -- common/autotest_common.sh@10 -- # set +x 00:04:08.081 21:05:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:08.081 21:05:02 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:08.081 21:05:02 -- rpc/rpc.sh@13 -- # jq length 00:04:08.081 21:05:02 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:08.081 21:05:02 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:08.081 21:05:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:08.081 21:05:02 -- common/autotest_common.sh@10 
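The startup block above is rpc.sh launching spdk_tgt with '-e bdev' (which is why the bdev tracepoint group shows up enabled later) and then blocking in waitforlisten until /var/tmp/spdk.sock answers. A hedged condensation of that pattern, polling with the same spdk_get_version RPC this log uses elsewhere:

  SPDK_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk
  "$SPDK_DIR/build/bin/spdk_tgt" -e bdev &
  tgt_pid=$!
  until "$SPDK_DIR/scripts/rpc.py" spdk_get_version >/dev/null 2>&1; do
    kill -0 "$tgt_pid" || { echo "spdk_tgt died during startup" >&2; exit 1; }
    sleep 0.1
  done
  echo "spdk_tgt ($tgt_pid) is listening on /var/tmp/spdk.sock"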
-- # set +x 00:04:08.081 21:05:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:08.081 21:05:02 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:08.081 21:05:02 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:08.081 21:05:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:08.081 21:05:02 -- common/autotest_common.sh@10 -- # set +x 00:04:08.081 21:05:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:08.081 21:05:02 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:08.081 { 00:04:08.081 "name": "Malloc0", 00:04:08.081 "aliases": [ 00:04:08.081 "0aae4efd-9ec7-4abe-92b1-dce0a932308b" 00:04:08.081 ], 00:04:08.081 "product_name": "Malloc disk", 00:04:08.081 "block_size": 512, 00:04:08.081 "num_blocks": 16384, 00:04:08.081 "uuid": "0aae4efd-9ec7-4abe-92b1-dce0a932308b", 00:04:08.081 "assigned_rate_limits": { 00:04:08.081 "rw_ios_per_sec": 0, 00:04:08.081 "rw_mbytes_per_sec": 0, 00:04:08.081 "r_mbytes_per_sec": 0, 00:04:08.081 "w_mbytes_per_sec": 0 00:04:08.081 }, 00:04:08.081 "claimed": false, 00:04:08.081 "zoned": false, 00:04:08.081 "supported_io_types": { 00:04:08.081 "read": true, 00:04:08.081 "write": true, 00:04:08.081 "unmap": true, 00:04:08.081 "write_zeroes": true, 00:04:08.081 "flush": true, 00:04:08.081 "reset": true, 00:04:08.081 "compare": false, 00:04:08.081 "compare_and_write": false, 00:04:08.081 "abort": true, 00:04:08.081 "nvme_admin": false, 00:04:08.081 "nvme_io": false 00:04:08.081 }, 00:04:08.081 "memory_domains": [ 00:04:08.081 { 00:04:08.081 "dma_device_id": "system", 00:04:08.081 "dma_device_type": 1 00:04:08.081 }, 00:04:08.081 { 00:04:08.081 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:08.081 "dma_device_type": 2 00:04:08.081 } 00:04:08.081 ], 00:04:08.081 "driver_specific": {} 00:04:08.081 } 00:04:08.081 ]' 00:04:08.081 21:05:02 -- rpc/rpc.sh@17 -- # jq length 00:04:08.081 21:05:02 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:08.081 21:05:02 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:08.081 21:05:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:08.081 21:05:02 -- common/autotest_common.sh@10 -- # set +x 00:04:08.081 [2024-04-23 21:05:02.290206] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:08.081 [2024-04-23 21:05:02.290249] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:08.081 [2024-04-23 21:05:02.290274] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000020180 00:04:08.081 [2024-04-23 21:05:02.290284] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:08.081 [2024-04-23 21:05:02.292013] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:08.081 [2024-04-23 21:05:02.292038] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:08.081 Passthru0 00:04:08.081 21:05:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:08.081 21:05:02 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:08.081 21:05:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:08.081 21:05:02 -- common/autotest_common.sh@10 -- # set +x 00:04:08.081 21:05:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:08.082 21:05:02 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:08.082 { 00:04:08.082 "name": "Malloc0", 00:04:08.082 "aliases": [ 00:04:08.082 "0aae4efd-9ec7-4abe-92b1-dce0a932308b" 00:04:08.082 ], 00:04:08.082 "product_name": "Malloc disk", 00:04:08.082 "block_size": 512, 00:04:08.082 "num_blocks": 16384, 00:04:08.082 
"uuid": "0aae4efd-9ec7-4abe-92b1-dce0a932308b", 00:04:08.082 "assigned_rate_limits": { 00:04:08.082 "rw_ios_per_sec": 0, 00:04:08.082 "rw_mbytes_per_sec": 0, 00:04:08.082 "r_mbytes_per_sec": 0, 00:04:08.082 "w_mbytes_per_sec": 0 00:04:08.082 }, 00:04:08.082 "claimed": true, 00:04:08.082 "claim_type": "exclusive_write", 00:04:08.082 "zoned": false, 00:04:08.082 "supported_io_types": { 00:04:08.082 "read": true, 00:04:08.082 "write": true, 00:04:08.082 "unmap": true, 00:04:08.082 "write_zeroes": true, 00:04:08.082 "flush": true, 00:04:08.082 "reset": true, 00:04:08.082 "compare": false, 00:04:08.082 "compare_and_write": false, 00:04:08.082 "abort": true, 00:04:08.082 "nvme_admin": false, 00:04:08.082 "nvme_io": false 00:04:08.082 }, 00:04:08.082 "memory_domains": [ 00:04:08.082 { 00:04:08.082 "dma_device_id": "system", 00:04:08.082 "dma_device_type": 1 00:04:08.082 }, 00:04:08.082 { 00:04:08.082 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:08.082 "dma_device_type": 2 00:04:08.082 } 00:04:08.082 ], 00:04:08.082 "driver_specific": {} 00:04:08.082 }, 00:04:08.082 { 00:04:08.082 "name": "Passthru0", 00:04:08.082 "aliases": [ 00:04:08.082 "a3ea3afe-e84d-583e-b778-3db8ec5462a3" 00:04:08.082 ], 00:04:08.082 "product_name": "passthru", 00:04:08.082 "block_size": 512, 00:04:08.082 "num_blocks": 16384, 00:04:08.082 "uuid": "a3ea3afe-e84d-583e-b778-3db8ec5462a3", 00:04:08.082 "assigned_rate_limits": { 00:04:08.082 "rw_ios_per_sec": 0, 00:04:08.082 "rw_mbytes_per_sec": 0, 00:04:08.082 "r_mbytes_per_sec": 0, 00:04:08.082 "w_mbytes_per_sec": 0 00:04:08.082 }, 00:04:08.082 "claimed": false, 00:04:08.082 "zoned": false, 00:04:08.082 "supported_io_types": { 00:04:08.082 "read": true, 00:04:08.082 "write": true, 00:04:08.082 "unmap": true, 00:04:08.082 "write_zeroes": true, 00:04:08.082 "flush": true, 00:04:08.082 "reset": true, 00:04:08.082 "compare": false, 00:04:08.082 "compare_and_write": false, 00:04:08.082 "abort": true, 00:04:08.082 "nvme_admin": false, 00:04:08.082 "nvme_io": false 00:04:08.082 }, 00:04:08.082 "memory_domains": [ 00:04:08.082 { 00:04:08.082 "dma_device_id": "system", 00:04:08.082 "dma_device_type": 1 00:04:08.082 }, 00:04:08.082 { 00:04:08.082 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:08.082 "dma_device_type": 2 00:04:08.082 } 00:04:08.082 ], 00:04:08.082 "driver_specific": { 00:04:08.082 "passthru": { 00:04:08.082 "name": "Passthru0", 00:04:08.082 "base_bdev_name": "Malloc0" 00:04:08.082 } 00:04:08.082 } 00:04:08.082 } 00:04:08.082 ]' 00:04:08.082 21:05:02 -- rpc/rpc.sh@21 -- # jq length 00:04:08.082 21:05:02 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:08.082 21:05:02 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:08.082 21:05:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:08.082 21:05:02 -- common/autotest_common.sh@10 -- # set +x 00:04:08.082 21:05:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:08.082 21:05:02 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:08.082 21:05:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:08.341 21:05:02 -- common/autotest_common.sh@10 -- # set +x 00:04:08.341 21:05:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:08.341 21:05:02 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:08.341 21:05:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:08.341 21:05:02 -- common/autotest_common.sh@10 -- # set +x 00:04:08.341 21:05:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:08.341 21:05:02 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:08.341 21:05:02 
-- rpc/rpc.sh@26 -- # jq length 00:04:08.342 21:05:02 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:08.342 00:04:08.342 real 0m0.228s 00:04:08.342 user 0m0.117s 00:04:08.342 sys 0m0.037s 00:04:08.342 21:05:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:08.342 21:05:02 -- common/autotest_common.sh@10 -- # set +x 00:04:08.342 ************************************ 00:04:08.342 END TEST rpc_integrity 00:04:08.342 ************************************ 00:04:08.342 21:05:02 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:08.342 21:05:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:08.342 21:05:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:08.342 21:05:02 -- common/autotest_common.sh@10 -- # set +x 00:04:08.342 ************************************ 00:04:08.342 START TEST rpc_plugins 00:04:08.342 ************************************ 00:04:08.342 21:05:02 -- common/autotest_common.sh@1111 -- # rpc_plugins 00:04:08.342 21:05:02 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:08.342 21:05:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:08.342 21:05:02 -- common/autotest_common.sh@10 -- # set +x 00:04:08.342 21:05:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:08.342 21:05:02 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:08.342 21:05:02 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:08.342 21:05:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:08.342 21:05:02 -- common/autotest_common.sh@10 -- # set +x 00:04:08.342 21:05:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:08.342 21:05:02 -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:08.342 { 00:04:08.342 "name": "Malloc1", 00:04:08.342 "aliases": [ 00:04:08.342 "5dde03ce-ffe6-4f7f-b07d-d255be0d72f0" 00:04:08.342 ], 00:04:08.342 "product_name": "Malloc disk", 00:04:08.342 "block_size": 4096, 00:04:08.342 "num_blocks": 256, 00:04:08.342 "uuid": "5dde03ce-ffe6-4f7f-b07d-d255be0d72f0", 00:04:08.342 "assigned_rate_limits": { 00:04:08.342 "rw_ios_per_sec": 0, 00:04:08.342 "rw_mbytes_per_sec": 0, 00:04:08.342 "r_mbytes_per_sec": 0, 00:04:08.342 "w_mbytes_per_sec": 0 00:04:08.342 }, 00:04:08.342 "claimed": false, 00:04:08.342 "zoned": false, 00:04:08.342 "supported_io_types": { 00:04:08.342 "read": true, 00:04:08.342 "write": true, 00:04:08.342 "unmap": true, 00:04:08.342 "write_zeroes": true, 00:04:08.342 "flush": true, 00:04:08.342 "reset": true, 00:04:08.342 "compare": false, 00:04:08.342 "compare_and_write": false, 00:04:08.342 "abort": true, 00:04:08.342 "nvme_admin": false, 00:04:08.342 "nvme_io": false 00:04:08.342 }, 00:04:08.342 "memory_domains": [ 00:04:08.342 { 00:04:08.342 "dma_device_id": "system", 00:04:08.342 "dma_device_type": 1 00:04:08.342 }, 00:04:08.342 { 00:04:08.342 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:08.342 "dma_device_type": 2 00:04:08.342 } 00:04:08.342 ], 00:04:08.342 "driver_specific": {} 00:04:08.342 } 00:04:08.342 ]' 00:04:08.342 21:05:02 -- rpc/rpc.sh@32 -- # jq length 00:04:08.342 21:05:02 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:08.342 21:05:02 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:08.342 21:05:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:08.342 21:05:02 -- common/autotest_common.sh@10 -- # set +x 00:04:08.601 21:05:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:08.601 21:05:02 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:08.601 21:05:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:08.601 21:05:02 -- 
common/autotest_common.sh@10 -- # set +x 00:04:08.601 21:05:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:08.601 21:05:02 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:08.601 21:05:02 -- rpc/rpc.sh@36 -- # jq length 00:04:08.601 21:05:02 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:08.601 00:04:08.601 real 0m0.098s 00:04:08.601 user 0m0.056s 00:04:08.601 sys 0m0.012s 00:04:08.601 21:05:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:08.601 21:05:02 -- common/autotest_common.sh@10 -- # set +x 00:04:08.601 ************************************ 00:04:08.601 END TEST rpc_plugins 00:04:08.601 ************************************ 00:04:08.601 21:05:02 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:08.601 21:05:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:08.601 21:05:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:08.601 21:05:02 -- common/autotest_common.sh@10 -- # set +x 00:04:08.601 ************************************ 00:04:08.601 START TEST rpc_trace_cmd_test 00:04:08.601 ************************************ 00:04:08.601 21:05:02 -- common/autotest_common.sh@1111 -- # rpc_trace_cmd_test 00:04:08.602 21:05:02 -- rpc/rpc.sh@40 -- # local info 00:04:08.602 21:05:02 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:08.602 21:05:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:08.602 21:05:02 -- common/autotest_common.sh@10 -- # set +x 00:04:08.602 21:05:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:08.602 21:05:02 -- rpc/rpc.sh@42 -- # info='{ 00:04:08.602 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1214870", 00:04:08.602 "tpoint_group_mask": "0x8", 00:04:08.602 "iscsi_conn": { 00:04:08.602 "mask": "0x2", 00:04:08.602 "tpoint_mask": "0x0" 00:04:08.602 }, 00:04:08.602 "scsi": { 00:04:08.602 "mask": "0x4", 00:04:08.602 "tpoint_mask": "0x0" 00:04:08.602 }, 00:04:08.602 "bdev": { 00:04:08.602 "mask": "0x8", 00:04:08.602 "tpoint_mask": "0xffffffffffffffff" 00:04:08.602 }, 00:04:08.602 "nvmf_rdma": { 00:04:08.602 "mask": "0x10", 00:04:08.602 "tpoint_mask": "0x0" 00:04:08.602 }, 00:04:08.602 "nvmf_tcp": { 00:04:08.602 "mask": "0x20", 00:04:08.602 "tpoint_mask": "0x0" 00:04:08.602 }, 00:04:08.602 "ftl": { 00:04:08.602 "mask": "0x40", 00:04:08.602 "tpoint_mask": "0x0" 00:04:08.602 }, 00:04:08.602 "blobfs": { 00:04:08.602 "mask": "0x80", 00:04:08.602 "tpoint_mask": "0x0" 00:04:08.602 }, 00:04:08.602 "dsa": { 00:04:08.602 "mask": "0x200", 00:04:08.602 "tpoint_mask": "0x0" 00:04:08.602 }, 00:04:08.602 "thread": { 00:04:08.602 "mask": "0x400", 00:04:08.602 "tpoint_mask": "0x0" 00:04:08.602 }, 00:04:08.602 "nvme_pcie": { 00:04:08.602 "mask": "0x800", 00:04:08.602 "tpoint_mask": "0x0" 00:04:08.602 }, 00:04:08.602 "iaa": { 00:04:08.602 "mask": "0x1000", 00:04:08.602 "tpoint_mask": "0x0" 00:04:08.602 }, 00:04:08.602 "nvme_tcp": { 00:04:08.602 "mask": "0x2000", 00:04:08.602 "tpoint_mask": "0x0" 00:04:08.602 }, 00:04:08.602 "bdev_nvme": { 00:04:08.602 "mask": "0x4000", 00:04:08.602 "tpoint_mask": "0x0" 00:04:08.602 }, 00:04:08.602 "sock": { 00:04:08.602 "mask": "0x8000", 00:04:08.602 "tpoint_mask": "0x0" 00:04:08.602 } 00:04:08.602 }' 00:04:08.602 21:05:02 -- rpc/rpc.sh@43 -- # jq length 00:04:08.602 21:05:02 -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:08.602 21:05:02 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:08.602 21:05:02 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:08.602 21:05:02 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:08.861 21:05:02 -- rpc/rpc.sh@45 -- # '[' 
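The trace_get_info dump above shows tpoint_group_mask 0x8 with the bdev group fully enabled (tpoint_mask 0xffffffffffffffff) and every other group at 0x0, which is exactly the '-e bdev' flag spdk_tgt was started with; the jq 'has(...)' probes that follow just assert those keys exist. Spot-checking the same fields by hand:

  SPDK_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk
  RPC="$SPDK_DIR/scripts/rpc.py"
  $RPC trace_get_info | jq -r .tpoint_group_mask   # "0x8" (the bdev group bit)
  $RPC trace_get_info | jq -r .bdev.tpoint_mask    # "0xffffffffffffffff"
  $RPC trace_get_info | jq -r .tpoint_shm_path     # /dev/shm/spdk_tgt_trace.pid<pid>
  # That shm file is what 'spdk_trace -s spdk_tgt -p <pid>' decodes, per the
  # startup notice earlier in this log.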
true = true ']' 00:04:08.861 21:05:02 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:08.861 21:05:02 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:08.861 21:05:02 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:08.861 21:05:02 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:08.861 00:04:08.861 real 0m0.174s 00:04:08.861 user 0m0.137s 00:04:08.861 sys 0m0.028s 00:04:08.861 21:05:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:08.861 21:05:02 -- common/autotest_common.sh@10 -- # set +x 00:04:08.861 ************************************ 00:04:08.861 END TEST rpc_trace_cmd_test 00:04:08.861 ************************************ 00:04:08.861 21:05:02 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:08.861 21:05:02 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:08.861 21:05:02 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:08.861 21:05:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:08.861 21:05:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:08.861 21:05:02 -- common/autotest_common.sh@10 -- # set +x 00:04:08.861 ************************************ 00:04:08.861 START TEST rpc_daemon_integrity 00:04:08.861 ************************************ 00:04:08.861 21:05:03 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:04:08.861 21:05:03 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:08.861 21:05:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:08.861 21:05:03 -- common/autotest_common.sh@10 -- # set +x 00:04:08.861 21:05:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:08.861 21:05:03 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:08.861 21:05:03 -- rpc/rpc.sh@13 -- # jq length 00:04:08.861 21:05:03 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:08.861 21:05:03 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:08.861 21:05:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:08.861 21:05:03 -- common/autotest_common.sh@10 -- # set +x 00:04:08.861 21:05:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:08.861 21:05:03 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:08.861 21:05:03 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:08.861 21:05:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:08.861 21:05:03 -- common/autotest_common.sh@10 -- # set +x 00:04:08.861 21:05:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:08.861 21:05:03 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:08.861 { 00:04:08.861 "name": "Malloc2", 00:04:08.861 "aliases": [ 00:04:08.861 "692929de-57b5-4637-973e-411167e13956" 00:04:08.861 ], 00:04:08.861 "product_name": "Malloc disk", 00:04:08.861 "block_size": 512, 00:04:08.861 "num_blocks": 16384, 00:04:08.861 "uuid": "692929de-57b5-4637-973e-411167e13956", 00:04:08.861 "assigned_rate_limits": { 00:04:08.861 "rw_ios_per_sec": 0, 00:04:08.861 "rw_mbytes_per_sec": 0, 00:04:08.861 "r_mbytes_per_sec": 0, 00:04:08.861 "w_mbytes_per_sec": 0 00:04:08.861 }, 00:04:08.861 "claimed": false, 00:04:08.861 "zoned": false, 00:04:08.861 "supported_io_types": { 00:04:08.861 "read": true, 00:04:08.861 "write": true, 00:04:08.861 "unmap": true, 00:04:08.861 "write_zeroes": true, 00:04:08.861 "flush": true, 00:04:08.861 "reset": true, 00:04:08.861 "compare": false, 00:04:08.861 "compare_and_write": false, 00:04:08.861 "abort": true, 00:04:08.861 "nvme_admin": false, 00:04:08.861 "nvme_io": false 00:04:08.861 }, 00:04:08.861 "memory_domains": [ 00:04:08.861 { 00:04:08.861 "dma_device_id": "system", 00:04:08.861 "dma_device_type": 1 00:04:08.861 }, 00:04:08.861 { 
00:04:08.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:08.862 "dma_device_type": 2 00:04:08.862 } 00:04:08.862 ], 00:04:08.862 "driver_specific": {} 00:04:08.862 } 00:04:08.862 ]' 00:04:08.862 21:05:03 -- rpc/rpc.sh@17 -- # jq length 00:04:09.122 21:05:03 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:09.122 21:05:03 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:09.122 21:05:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:09.122 21:05:03 -- common/autotest_common.sh@10 -- # set +x 00:04:09.122 [2024-04-23 21:05:03.153290] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:09.122 [2024-04-23 21:05:03.153331] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:09.122 [2024-04-23 21:05:03.153354] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000021380 00:04:09.122 [2024-04-23 21:05:03.153362] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:09.122 [2024-04-23 21:05:03.155125] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:09.122 [2024-04-23 21:05:03.155150] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:09.122 Passthru0 00:04:09.122 21:05:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:09.122 21:05:03 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:09.122 21:05:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:09.122 21:05:03 -- common/autotest_common.sh@10 -- # set +x 00:04:09.122 21:05:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:09.122 21:05:03 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:09.122 { 00:04:09.122 "name": "Malloc2", 00:04:09.122 "aliases": [ 00:04:09.122 "692929de-57b5-4637-973e-411167e13956" 00:04:09.122 ], 00:04:09.122 "product_name": "Malloc disk", 00:04:09.122 "block_size": 512, 00:04:09.122 "num_blocks": 16384, 00:04:09.122 "uuid": "692929de-57b5-4637-973e-411167e13956", 00:04:09.122 "assigned_rate_limits": { 00:04:09.122 "rw_ios_per_sec": 0, 00:04:09.122 "rw_mbytes_per_sec": 0, 00:04:09.122 "r_mbytes_per_sec": 0, 00:04:09.122 "w_mbytes_per_sec": 0 00:04:09.122 }, 00:04:09.122 "claimed": true, 00:04:09.122 "claim_type": "exclusive_write", 00:04:09.122 "zoned": false, 00:04:09.122 "supported_io_types": { 00:04:09.122 "read": true, 00:04:09.122 "write": true, 00:04:09.122 "unmap": true, 00:04:09.122 "write_zeroes": true, 00:04:09.122 "flush": true, 00:04:09.122 "reset": true, 00:04:09.122 "compare": false, 00:04:09.122 "compare_and_write": false, 00:04:09.122 "abort": true, 00:04:09.122 "nvme_admin": false, 00:04:09.122 "nvme_io": false 00:04:09.122 }, 00:04:09.122 "memory_domains": [ 00:04:09.122 { 00:04:09.122 "dma_device_id": "system", 00:04:09.122 "dma_device_type": 1 00:04:09.122 }, 00:04:09.122 { 00:04:09.122 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:09.122 "dma_device_type": 2 00:04:09.122 } 00:04:09.122 ], 00:04:09.122 "driver_specific": {} 00:04:09.122 }, 00:04:09.122 { 00:04:09.122 "name": "Passthru0", 00:04:09.122 "aliases": [ 00:04:09.122 "5d582d02-47b7-5532-937c-1713682279ce" 00:04:09.122 ], 00:04:09.122 "product_name": "passthru", 00:04:09.122 "block_size": 512, 00:04:09.122 "num_blocks": 16384, 00:04:09.122 "uuid": "5d582d02-47b7-5532-937c-1713682279ce", 00:04:09.122 "assigned_rate_limits": { 00:04:09.122 "rw_ios_per_sec": 0, 00:04:09.122 "rw_mbytes_per_sec": 0, 00:04:09.122 "r_mbytes_per_sec": 0, 00:04:09.122 "w_mbytes_per_sec": 0 00:04:09.122 }, 00:04:09.122 
"claimed": false, 00:04:09.122 "zoned": false, 00:04:09.122 "supported_io_types": { 00:04:09.122 "read": true, 00:04:09.122 "write": true, 00:04:09.122 "unmap": true, 00:04:09.122 "write_zeroes": true, 00:04:09.122 "flush": true, 00:04:09.122 "reset": true, 00:04:09.122 "compare": false, 00:04:09.122 "compare_and_write": false, 00:04:09.122 "abort": true, 00:04:09.122 "nvme_admin": false, 00:04:09.122 "nvme_io": false 00:04:09.122 }, 00:04:09.122 "memory_domains": [ 00:04:09.122 { 00:04:09.122 "dma_device_id": "system", 00:04:09.122 "dma_device_type": 1 00:04:09.122 }, 00:04:09.122 { 00:04:09.122 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:09.122 "dma_device_type": 2 00:04:09.122 } 00:04:09.122 ], 00:04:09.122 "driver_specific": { 00:04:09.122 "passthru": { 00:04:09.122 "name": "Passthru0", 00:04:09.122 "base_bdev_name": "Malloc2" 00:04:09.122 } 00:04:09.122 } 00:04:09.122 } 00:04:09.122 ]' 00:04:09.122 21:05:03 -- rpc/rpc.sh@21 -- # jq length 00:04:09.122 21:05:03 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:09.122 21:05:03 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:09.122 21:05:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:09.122 21:05:03 -- common/autotest_common.sh@10 -- # set +x 00:04:09.122 21:05:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:09.122 21:05:03 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:09.122 21:05:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:09.122 21:05:03 -- common/autotest_common.sh@10 -- # set +x 00:04:09.122 21:05:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:09.122 21:05:03 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:09.122 21:05:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:09.122 21:05:03 -- common/autotest_common.sh@10 -- # set +x 00:04:09.122 21:05:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:09.122 21:05:03 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:09.122 21:05:03 -- rpc/rpc.sh@26 -- # jq length 00:04:09.122 21:05:03 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:09.122 00:04:09.122 real 0m0.202s 00:04:09.122 user 0m0.107s 00:04:09.122 sys 0m0.029s 00:04:09.122 21:05:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:09.122 21:05:03 -- common/autotest_common.sh@10 -- # set +x 00:04:09.122 ************************************ 00:04:09.122 END TEST rpc_daemon_integrity 00:04:09.122 ************************************ 00:04:09.122 21:05:03 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:09.122 21:05:03 -- rpc/rpc.sh@84 -- # killprocess 1214870 00:04:09.122 21:05:03 -- common/autotest_common.sh@936 -- # '[' -z 1214870 ']' 00:04:09.122 21:05:03 -- common/autotest_common.sh@940 -- # kill -0 1214870 00:04:09.122 21:05:03 -- common/autotest_common.sh@941 -- # uname 00:04:09.122 21:05:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:09.122 21:05:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1214870 00:04:09.122 21:05:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:09.122 21:05:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:09.122 21:05:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1214870' 00:04:09.122 killing process with pid 1214870 00:04:09.122 21:05:03 -- common/autotest_common.sh@955 -- # kill 1214870 00:04:09.122 21:05:03 -- common/autotest_common.sh@960 -- # wait 1214870 00:04:10.061 00:04:10.061 real 0m3.010s 00:04:10.061 user 0m3.525s 00:04:10.061 sys 0m0.850s 00:04:10.061 21:05:04 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:04:10.061 21:05:04 -- common/autotest_common.sh@10 -- # set +x 00:04:10.061 ************************************ 00:04:10.061 END TEST rpc 00:04:10.061 ************************************ 00:04:10.061 21:05:04 -- spdk/autotest.sh@166 -- # run_test skip_rpc /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:10.061 21:05:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:10.061 21:05:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:10.061 21:05:04 -- common/autotest_common.sh@10 -- # set +x 00:04:10.061 ************************************ 00:04:10.061 START TEST skip_rpc 00:04:10.061 ************************************ 00:04:10.061 21:05:04 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:10.322 * Looking for test storage... 00:04:10.322 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc 00:04:10.322 21:05:04 -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc/config.json 00:04:10.322 21:05:04 -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc/log.txt 00:04:10.322 21:05:04 -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:10.322 21:05:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:10.322 21:05:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:10.322 21:05:04 -- common/autotest_common.sh@10 -- # set +x 00:04:10.322 ************************************ 00:04:10.322 START TEST skip_rpc 00:04:10.322 ************************************ 00:04:10.322 21:05:04 -- common/autotest_common.sh@1111 -- # test_skip_rpc 00:04:10.322 21:05:04 -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1215749 00:04:10.322 21:05:04 -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:10.322 21:05:04 -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:10.322 21:05:04 -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:10.322 [2024-04-23 21:05:04.593450] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 
00:04:10.322 [2024-04-23 21:05:04.593577] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1215749 ] 00:04:10.581 EAL: No free 2048 kB hugepages reported on node 1 00:04:10.581 [2024-04-23 21:05:04.729052] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:10.581 [2024-04-23 21:05:04.826708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:15.857 21:05:09 -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:15.857 21:05:09 -- common/autotest_common.sh@638 -- # local es=0 00:04:15.857 21:05:09 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:15.857 21:05:09 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:04:15.857 21:05:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:15.857 21:05:09 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:04:15.857 21:05:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:15.857 21:05:09 -- common/autotest_common.sh@641 -- # rpc_cmd spdk_get_version 00:04:15.857 21:05:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:15.857 21:05:09 -- common/autotest_common.sh@10 -- # set +x 00:04:15.857 21:05:09 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:04:15.857 21:05:09 -- common/autotest_common.sh@641 -- # es=1 00:04:15.857 21:05:09 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:04:15.857 21:05:09 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:04:15.857 21:05:09 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:04:15.857 21:05:09 -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:15.857 21:05:09 -- rpc/skip_rpc.sh@23 -- # killprocess 1215749 00:04:15.857 21:05:09 -- common/autotest_common.sh@936 -- # '[' -z 1215749 ']' 00:04:15.857 21:05:09 -- common/autotest_common.sh@940 -- # kill -0 1215749 00:04:15.857 21:05:09 -- common/autotest_common.sh@941 -- # uname 00:04:15.857 21:05:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:15.857 21:05:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1215749 00:04:15.857 21:05:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:15.857 21:05:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:15.857 21:05:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1215749' 00:04:15.857 killing process with pid 1215749 00:04:15.857 21:05:09 -- common/autotest_common.sh@955 -- # kill 1215749 00:04:15.857 21:05:09 -- common/autotest_common.sh@960 -- # wait 1215749 00:04:16.425 00:04:16.425 real 0m5.923s 00:04:16.425 user 0m5.577s 00:04:16.425 sys 0m0.362s 00:04:16.425 21:05:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:16.425 21:05:10 -- common/autotest_common.sh@10 -- # set +x 00:04:16.425 ************************************ 00:04:16.425 END TEST skip_rpc 00:04:16.425 ************************************ 00:04:16.425 21:05:10 -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:16.425 21:05:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:16.425 21:05:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:16.425 21:05:10 -- common/autotest_common.sh@10 -- # set +x 00:04:16.425 ************************************ 00:04:16.425 START TEST skip_rpc_with_json 00:04:16.425 ************************************ 
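Before the skip_rpc_with_json output below, note how the skip_rpc pass that just ended works: it is a negative test. spdk_tgt is started with --no-rpc-server, the script sleeps rather than calling waitforlisten (no socket will ever appear), and the NOT wrapper passes only because spdk_get_version fails. Condensed into a hedged standalone check:

  SPDK_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk
  "$SPDK_DIR/build/bin/spdk_tgt" --no-rpc-server -m 0x1 &
  tgt_pid=$!
  sleep 5                                          # mirrors the test's fixed sleep
  if "$SPDK_DIR/scripts/rpc.py" spdk_get_version >/dev/null 2>&1; then
    echo "FAIL: RPC server answered despite --no-rpc-server" >&2
  else
    echo "PASS: spdk_get_version failed as expected"
  fi
  kill "$tgt_pid"; wait "$tgt_pid" 2>/dev/null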
00:04:16.425 21:05:10 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_json 00:04:16.425 21:05:10 -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:16.425 21:05:10 -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1216921 00:04:16.425 21:05:10 -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:16.425 21:05:10 -- rpc/skip_rpc.sh@31 -- # waitforlisten 1216921 00:04:16.425 21:05:10 -- common/autotest_common.sh@817 -- # '[' -z 1216921 ']' 00:04:16.425 21:05:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:16.425 21:05:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:16.425 21:05:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:16.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:16.425 21:05:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:16.425 21:05:10 -- common/autotest_common.sh@10 -- # set +x 00:04:16.425 21:05:10 -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:16.425 [2024-04-23 21:05:10.651242] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 00:04:16.426 [2024-04-23 21:05:10.651380] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1216921 ] 00:04:16.685 EAL: No free 2048 kB hugepages reported on node 1 00:04:16.685 [2024-04-23 21:05:10.789198] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:16.685 [2024-04-23 21:05:10.881927] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:17.253 21:05:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:17.253 21:05:11 -- common/autotest_common.sh@850 -- # return 0 00:04:17.253 21:05:11 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:17.253 21:05:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:17.253 21:05:11 -- common/autotest_common.sh@10 -- # set +x 00:04:17.253 [2024-04-23 21:05:11.360524] nvmf_rpc.c:2513:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:17.253 request: 00:04:17.253 { 00:04:17.253 "trtype": "tcp", 00:04:17.253 "method": "nvmf_get_transports", 00:04:17.253 "req_id": 1 00:04:17.253 } 00:04:17.253 Got JSON-RPC error response 00:04:17.253 response: 00:04:17.253 { 00:04:17.253 "code": -19, 00:04:17.253 "message": "No such device" 00:04:17.253 } 00:04:17.253 21:05:11 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:04:17.253 21:05:11 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:17.253 21:05:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:17.253 21:05:11 -- common/autotest_common.sh@10 -- # set +x 00:04:17.253 [2024-04-23 21:05:11.368652] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:17.253 21:05:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:17.253 21:05:11 -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:17.253 21:05:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:17.253 21:05:11 -- common/autotest_common.sh@10 -- # set +x 00:04:17.253 21:05:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:17.253 21:05:11 -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc/config.json 00:04:17.253 { 
00:04:17.253 "subsystems": [ 00:04:17.253 { 00:04:17.253 "subsystem": "keyring", 00:04:17.253 "config": [] 00:04:17.253 }, 00:04:17.253 { 00:04:17.253 "subsystem": "iobuf", 00:04:17.253 "config": [ 00:04:17.253 { 00:04:17.253 "method": "iobuf_set_options", 00:04:17.253 "params": { 00:04:17.253 "small_pool_count": 8192, 00:04:17.253 "large_pool_count": 1024, 00:04:17.253 "small_bufsize": 8192, 00:04:17.253 "large_bufsize": 135168 00:04:17.253 } 00:04:17.253 } 00:04:17.253 ] 00:04:17.253 }, 00:04:17.253 { 00:04:17.253 "subsystem": "sock", 00:04:17.253 "config": [ 00:04:17.253 { 00:04:17.253 "method": "sock_impl_set_options", 00:04:17.253 "params": { 00:04:17.253 "impl_name": "posix", 00:04:17.253 "recv_buf_size": 2097152, 00:04:17.253 "send_buf_size": 2097152, 00:04:17.253 "enable_recv_pipe": true, 00:04:17.253 "enable_quickack": false, 00:04:17.253 "enable_placement_id": 0, 00:04:17.253 "enable_zerocopy_send_server": true, 00:04:17.253 "enable_zerocopy_send_client": false, 00:04:17.253 "zerocopy_threshold": 0, 00:04:17.253 "tls_version": 0, 00:04:17.253 "enable_ktls": false 00:04:17.253 } 00:04:17.253 }, 00:04:17.253 { 00:04:17.253 "method": "sock_impl_set_options", 00:04:17.253 "params": { 00:04:17.253 "impl_name": "ssl", 00:04:17.253 "recv_buf_size": 4096, 00:04:17.253 "send_buf_size": 4096, 00:04:17.253 "enable_recv_pipe": true, 00:04:17.253 "enable_quickack": false, 00:04:17.253 "enable_placement_id": 0, 00:04:17.253 "enable_zerocopy_send_server": true, 00:04:17.253 "enable_zerocopy_send_client": false, 00:04:17.253 "zerocopy_threshold": 0, 00:04:17.253 "tls_version": 0, 00:04:17.253 "enable_ktls": false 00:04:17.253 } 00:04:17.253 } 00:04:17.253 ] 00:04:17.253 }, 00:04:17.253 { 00:04:17.253 "subsystem": "vmd", 00:04:17.253 "config": [] 00:04:17.253 }, 00:04:17.253 { 00:04:17.253 "subsystem": "accel", 00:04:17.253 "config": [ 00:04:17.253 { 00:04:17.253 "method": "accel_set_options", 00:04:17.253 "params": { 00:04:17.253 "small_cache_size": 128, 00:04:17.253 "large_cache_size": 16, 00:04:17.253 "task_count": 2048, 00:04:17.253 "sequence_count": 2048, 00:04:17.253 "buf_count": 2048 00:04:17.253 } 00:04:17.253 } 00:04:17.253 ] 00:04:17.253 }, 00:04:17.253 { 00:04:17.253 "subsystem": "bdev", 00:04:17.253 "config": [ 00:04:17.253 { 00:04:17.253 "method": "bdev_set_options", 00:04:17.253 "params": { 00:04:17.253 "bdev_io_pool_size": 65535, 00:04:17.253 "bdev_io_cache_size": 256, 00:04:17.253 "bdev_auto_examine": true, 00:04:17.253 "iobuf_small_cache_size": 128, 00:04:17.253 "iobuf_large_cache_size": 16 00:04:17.253 } 00:04:17.253 }, 00:04:17.253 { 00:04:17.253 "method": "bdev_raid_set_options", 00:04:17.253 "params": { 00:04:17.253 "process_window_size_kb": 1024 00:04:17.253 } 00:04:17.253 }, 00:04:17.253 { 00:04:17.253 "method": "bdev_iscsi_set_options", 00:04:17.253 "params": { 00:04:17.253 "timeout_sec": 30 00:04:17.253 } 00:04:17.253 }, 00:04:17.253 { 00:04:17.253 "method": "bdev_nvme_set_options", 00:04:17.253 "params": { 00:04:17.253 "action_on_timeout": "none", 00:04:17.253 "timeout_us": 0, 00:04:17.253 "timeout_admin_us": 0, 00:04:17.253 "keep_alive_timeout_ms": 10000, 00:04:17.253 "arbitration_burst": 0, 00:04:17.253 "low_priority_weight": 0, 00:04:17.253 "medium_priority_weight": 0, 00:04:17.253 "high_priority_weight": 0, 00:04:17.253 "nvme_adminq_poll_period_us": 10000, 00:04:17.253 "nvme_ioq_poll_period_us": 0, 00:04:17.253 "io_queue_requests": 0, 00:04:17.253 "delay_cmd_submit": true, 00:04:17.253 "transport_retry_count": 4, 00:04:17.253 "bdev_retry_count": 3, 00:04:17.253 
"transport_ack_timeout": 0, 00:04:17.253 "ctrlr_loss_timeout_sec": 0, 00:04:17.253 "reconnect_delay_sec": 0, 00:04:17.253 "fast_io_fail_timeout_sec": 0, 00:04:17.253 "disable_auto_failback": false, 00:04:17.253 "generate_uuids": false, 00:04:17.253 "transport_tos": 0, 00:04:17.253 "nvme_error_stat": false, 00:04:17.253 "rdma_srq_size": 0, 00:04:17.253 "io_path_stat": false, 00:04:17.253 "allow_accel_sequence": false, 00:04:17.253 "rdma_max_cq_size": 0, 00:04:17.253 "rdma_cm_event_timeout_ms": 0, 00:04:17.253 "dhchap_digests": [ 00:04:17.253 "sha256", 00:04:17.253 "sha384", 00:04:17.253 "sha512" 00:04:17.253 ], 00:04:17.253 "dhchap_dhgroups": [ 00:04:17.253 "null", 00:04:17.253 "ffdhe2048", 00:04:17.253 "ffdhe3072", 00:04:17.253 "ffdhe4096", 00:04:17.253 "ffdhe6144", 00:04:17.253 "ffdhe8192" 00:04:17.253 ] 00:04:17.253 } 00:04:17.253 }, 00:04:17.253 { 00:04:17.253 "method": "bdev_nvme_set_hotplug", 00:04:17.253 "params": { 00:04:17.253 "period_us": 100000, 00:04:17.253 "enable": false 00:04:17.253 } 00:04:17.253 }, 00:04:17.253 { 00:04:17.253 "method": "bdev_wait_for_examine" 00:04:17.253 } 00:04:17.253 ] 00:04:17.253 }, 00:04:17.253 { 00:04:17.253 "subsystem": "scsi", 00:04:17.253 "config": null 00:04:17.253 }, 00:04:17.253 { 00:04:17.253 "subsystem": "scheduler", 00:04:17.253 "config": [ 00:04:17.253 { 00:04:17.253 "method": "framework_set_scheduler", 00:04:17.253 "params": { 00:04:17.253 "name": "static" 00:04:17.253 } 00:04:17.253 } 00:04:17.253 ] 00:04:17.253 }, 00:04:17.253 { 00:04:17.253 "subsystem": "vhost_scsi", 00:04:17.253 "config": [] 00:04:17.253 }, 00:04:17.253 { 00:04:17.253 "subsystem": "vhost_blk", 00:04:17.253 "config": [] 00:04:17.253 }, 00:04:17.253 { 00:04:17.253 "subsystem": "ublk", 00:04:17.253 "config": [] 00:04:17.253 }, 00:04:17.253 { 00:04:17.253 "subsystem": "nbd", 00:04:17.253 "config": [] 00:04:17.253 }, 00:04:17.253 { 00:04:17.253 "subsystem": "nvmf", 00:04:17.253 "config": [ 00:04:17.253 { 00:04:17.253 "method": "nvmf_set_config", 00:04:17.253 "params": { 00:04:17.253 "discovery_filter": "match_any", 00:04:17.253 "admin_cmd_passthru": { 00:04:17.253 "identify_ctrlr": false 00:04:17.253 } 00:04:17.253 } 00:04:17.253 }, 00:04:17.253 { 00:04:17.253 "method": "nvmf_set_max_subsystems", 00:04:17.253 "params": { 00:04:17.253 "max_subsystems": 1024 00:04:17.253 } 00:04:17.253 }, 00:04:17.253 { 00:04:17.253 "method": "nvmf_set_crdt", 00:04:17.253 "params": { 00:04:17.253 "crdt1": 0, 00:04:17.253 "crdt2": 0, 00:04:17.253 "crdt3": 0 00:04:17.253 } 00:04:17.253 }, 00:04:17.253 { 00:04:17.253 "method": "nvmf_create_transport", 00:04:17.253 "params": { 00:04:17.253 "trtype": "TCP", 00:04:17.253 "max_queue_depth": 128, 00:04:17.253 "max_io_qpairs_per_ctrlr": 127, 00:04:17.253 "in_capsule_data_size": 4096, 00:04:17.253 "max_io_size": 131072, 00:04:17.253 "io_unit_size": 131072, 00:04:17.253 "max_aq_depth": 128, 00:04:17.253 "num_shared_buffers": 511, 00:04:17.253 "buf_cache_size": 4294967295, 00:04:17.253 "dif_insert_or_strip": false, 00:04:17.253 "zcopy": false, 00:04:17.253 "c2h_success": true, 00:04:17.253 "sock_priority": 0, 00:04:17.253 "abort_timeout_sec": 1, 00:04:17.253 "ack_timeout": 0, 00:04:17.253 "data_wr_pool_size": 0 00:04:17.253 } 00:04:17.253 } 00:04:17.253 ] 00:04:17.253 }, 00:04:17.253 { 00:04:17.253 "subsystem": "iscsi", 00:04:17.253 "config": [ 00:04:17.253 { 00:04:17.253 "method": "iscsi_set_options", 00:04:17.253 "params": { 00:04:17.253 "node_base": "iqn.2016-06.io.spdk", 00:04:17.253 "max_sessions": 128, 00:04:17.253 "max_connections_per_session": 2, 
00:04:17.254 "max_queue_depth": 64, 00:04:17.254 "default_time2wait": 2, 00:04:17.254 "default_time2retain": 20, 00:04:17.254 "first_burst_length": 8192, 00:04:17.254 "immediate_data": true, 00:04:17.254 "allow_duplicated_isid": false, 00:04:17.254 "error_recovery_level": 0, 00:04:17.254 "nop_timeout": 60, 00:04:17.254 "nop_in_interval": 30, 00:04:17.254 "disable_chap": false, 00:04:17.254 "require_chap": false, 00:04:17.254 "mutual_chap": false, 00:04:17.254 "chap_group": 0, 00:04:17.254 "max_large_datain_per_connection": 64, 00:04:17.254 "max_r2t_per_connection": 4, 00:04:17.254 "pdu_pool_size": 36864, 00:04:17.254 "immediate_data_pool_size": 16384, 00:04:17.254 "data_out_pool_size": 2048 00:04:17.254 } 00:04:17.254 } 00:04:17.254 ] 00:04:17.254 } 00:04:17.254 ] 00:04:17.254 } 00:04:17.254 21:05:11 -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:17.254 21:05:11 -- rpc/skip_rpc.sh@40 -- # killprocess 1216921 00:04:17.254 21:05:11 -- common/autotest_common.sh@936 -- # '[' -z 1216921 ']' 00:04:17.254 21:05:11 -- common/autotest_common.sh@940 -- # kill -0 1216921 00:04:17.254 21:05:11 -- common/autotest_common.sh@941 -- # uname 00:04:17.254 21:05:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:17.254 21:05:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1216921 00:04:17.512 21:05:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:17.512 21:05:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:17.512 21:05:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1216921' 00:04:17.512 killing process with pid 1216921 00:04:17.512 21:05:11 -- common/autotest_common.sh@955 -- # kill 1216921 00:04:17.512 21:05:11 -- common/autotest_common.sh@960 -- # wait 1216921 00:04:18.448 21:05:12 -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1217485 00:04:18.448 21:05:12 -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:18.448 21:05:12 -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc/config.json 00:04:23.716 21:05:17 -- rpc/skip_rpc.sh@50 -- # killprocess 1217485 00:04:23.716 21:05:17 -- common/autotest_common.sh@936 -- # '[' -z 1217485 ']' 00:04:23.716 21:05:17 -- common/autotest_common.sh@940 -- # kill -0 1217485 00:04:23.716 21:05:17 -- common/autotest_common.sh@941 -- # uname 00:04:23.716 21:05:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:23.716 21:05:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1217485 00:04:23.716 21:05:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:23.716 21:05:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:23.716 21:05:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1217485' 00:04:23.716 killing process with pid 1217485 00:04:23.716 21:05:17 -- common/autotest_common.sh@955 -- # kill 1217485 00:04:23.716 21:05:17 -- common/autotest_common.sh@960 -- # wait 1217485 00:04:24.283 21:05:18 -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc/log.txt 00:04:24.283 21:05:18 -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc/log.txt 00:04:24.283 00:04:24.283 real 0m7.795s 00:04:24.283 user 0m7.409s 00:04:24.283 sys 0m0.710s 00:04:24.283 21:05:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:24.283 21:05:18 -- common/autotest_common.sh@10 -- # set +x 
00:04:24.283 ************************************ 00:04:24.283 END TEST skip_rpc_with_json 00:04:24.283 ************************************ 00:04:24.283 21:05:18 -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:24.283 21:05:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:24.283 21:05:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:24.284 21:05:18 -- common/autotest_common.sh@10 -- # set +x 00:04:24.284 ************************************ 00:04:24.284 START TEST skip_rpc_with_delay 00:04:24.284 ************************************ 00:04:24.284 21:05:18 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_delay 00:04:24.284 21:05:18 -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:24.284 21:05:18 -- common/autotest_common.sh@638 -- # local es=0 00:04:24.284 21:05:18 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:24.284 21:05:18 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt 00:04:24.284 21:05:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:24.284 21:05:18 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt 00:04:24.284 21:05:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:24.284 21:05:18 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt 00:04:24.284 21:05:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:24.284 21:05:18 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt 00:04:24.284 21:05:18 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:24.284 21:05:18 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:24.284 [2024-04-23 21:05:18.543480] app.c: 751:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
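Note: skip_rpc_with_delay is a pure negative test. --wait-for-rpc asks the target to pause initialization until an RPC tells it to continue, which is meaningless when --no-rpc-server disables the RPC server, so spdk_tgt must refuse to start; the ERROR above is the expected outcome. A rough equivalent of the NOT helper the test wraps around the command:

    if ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo 'FAIL: expected spdk_tgt to reject --wait-for-rpc without an RPC server' >&2
        exit 1
    fi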
00:04:24.284 [2024-04-23 21:05:18.543612] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:24.544 21:05:18 -- common/autotest_common.sh@641 -- # es=1 00:04:24.544 21:05:18 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:04:24.544 21:05:18 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:04:24.544 21:05:18 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:04:24.544 00:04:24.544 real 0m0.127s 00:04:24.544 user 0m0.072s 00:04:24.544 sys 0m0.054s 00:04:24.544 21:05:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:24.544 21:05:18 -- common/autotest_common.sh@10 -- # set +x 00:04:24.544 ************************************ 00:04:24.544 END TEST skip_rpc_with_delay 00:04:24.544 ************************************ 00:04:24.544 21:05:18 -- rpc/skip_rpc.sh@77 -- # uname 00:04:24.544 21:05:18 -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:24.544 21:05:18 -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:24.544 21:05:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:24.544 21:05:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:24.544 21:05:18 -- common/autotest_common.sh@10 -- # set +x 00:04:24.544 ************************************ 00:04:24.544 START TEST exit_on_failed_rpc_init 00:04:24.544 ************************************ 00:04:24.544 21:05:18 -- common/autotest_common.sh@1111 -- # test_exit_on_failed_rpc_init 00:04:24.544 21:05:18 -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1218737 00:04:24.544 21:05:18 -- rpc/skip_rpc.sh@63 -- # waitforlisten 1218737 00:04:24.544 21:05:18 -- common/autotest_common.sh@817 -- # '[' -z 1218737 ']' 00:04:24.544 21:05:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:24.544 21:05:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:24.544 21:05:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:24.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:24.544 21:05:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:24.544 21:05:18 -- common/autotest_common.sh@10 -- # set +x 00:04:24.544 21:05:18 -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:24.544 [2024-04-23 21:05:18.773722] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 
00:04:24.544 [2024-04-23 21:05:18.773838] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1218737 ] 00:04:24.804 EAL: No free 2048 kB hugepages reported on node 1 00:04:24.804 [2024-04-23 21:05:18.891022] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:24.804 [2024-04-23 21:05:18.986781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.371 21:05:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:25.371 21:05:19 -- common/autotest_common.sh@850 -- # return 0 00:04:25.371 21:05:19 -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:25.371 21:05:19 -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:25.371 21:05:19 -- common/autotest_common.sh@638 -- # local es=0 00:04:25.371 21:05:19 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:25.371 21:05:19 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt 00:04:25.371 21:05:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:25.371 21:05:19 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt 00:04:25.371 21:05:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:25.371 21:05:19 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt 00:04:25.371 21:05:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:25.371 21:05:19 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt 00:04:25.371 21:05:19 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:25.371 21:05:19 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:25.371 [2024-04-23 21:05:19.550992] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 00:04:25.371 [2024-04-23 21:05:19.551135] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1218755 ] 00:04:25.371 EAL: No free 2048 kB hugepages reported on node 1 00:04:25.630 [2024-04-23 21:05:19.685608] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:25.630 [2024-04-23 21:05:19.777305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:25.630 [2024-04-23 21:05:19.777392] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:25.630 [2024-04-23 21:05:19.777407] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:25.630 [2024-04-23 21:05:19.777417] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:25.898 21:05:19 -- common/autotest_common.sh@641 -- # es=234 00:04:25.899 21:05:19 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:04:25.899 21:05:19 -- common/autotest_common.sh@650 -- # es=106 00:04:25.899 21:05:19 -- common/autotest_common.sh@651 -- # case "$es" in 00:04:25.899 21:05:19 -- common/autotest_common.sh@658 -- # es=1 00:04:25.899 21:05:19 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:04:25.899 21:05:19 -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:25.899 21:05:19 -- rpc/skip_rpc.sh@70 -- # killprocess 1218737 00:04:25.899 21:05:19 -- common/autotest_common.sh@936 -- # '[' -z 1218737 ']' 00:04:25.899 21:05:19 -- common/autotest_common.sh@940 -- # kill -0 1218737 00:04:25.899 21:05:19 -- common/autotest_common.sh@941 -- # uname 00:04:25.899 21:05:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:25.899 21:05:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1218737 00:04:25.899 21:05:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:25.899 21:05:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:25.899 21:05:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1218737' 00:04:25.899 killing process with pid 1218737 00:04:25.899 21:05:19 -- common/autotest_common.sh@955 -- # kill 1218737 00:04:25.899 21:05:19 -- common/autotest_common.sh@960 -- # wait 1218737 00:04:26.840 00:04:26.840 real 0m2.135s 00:04:26.840 user 0m2.328s 00:04:26.840 sys 0m0.578s 00:04:26.840 21:05:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:26.840 21:05:20 -- common/autotest_common.sh@10 -- # set +x 00:04:26.840 ************************************ 00:04:26.840 END TEST exit_on_failed_rpc_init 00:04:26.840 ************************************ 00:04:26.840 21:05:20 -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc/config.json 00:04:26.840 00:04:26.840 real 0m16.568s 00:04:26.840 user 0m15.574s 00:04:26.840 sys 0m2.079s 00:04:26.840 21:05:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:26.840 21:05:20 -- common/autotest_common.sh@10 -- # set +x 00:04:26.840 ************************************ 00:04:26.840 END TEST skip_rpc 00:04:26.840 ************************************ 00:04:26.840 21:05:20 -- spdk/autotest.sh@167 -- # run_test rpc_client /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:26.840 21:05:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:26.840 21:05:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:26.840 21:05:20 -- common/autotest_common.sh@10 -- # set +x 00:04:26.840 ************************************ 00:04:26.840 START TEST rpc_client 00:04:26.840 ************************************ 00:04:26.840 21:05:21 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:26.840 * Looking for test storage... 
00:04:26.840 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_client 00:04:26.840 21:05:21 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:27.099 OK 00:04:27.099 21:05:21 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:27.099 00:04:27.099 real 0m0.133s 00:04:27.099 user 0m0.057s 00:04:27.099 sys 0m0.081s 00:04:27.099 21:05:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:27.099 21:05:21 -- common/autotest_common.sh@10 -- # set +x 00:04:27.099 ************************************ 00:04:27.099 END TEST rpc_client 00:04:27.099 ************************************ 00:04:27.099 21:05:21 -- spdk/autotest.sh@168 -- # run_test json_config /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/json_config.sh 00:04:27.099 21:05:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:27.099 21:05:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:27.099 21:05:21 -- common/autotest_common.sh@10 -- # set +x 00:04:27.099 ************************************ 00:04:27.099 START TEST json_config 00:04:27.099 ************************************ 00:04:27.099 21:05:21 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/json_config.sh 00:04:27.099 21:05:21 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:04:27.099 21:05:21 -- nvmf/common.sh@7 -- # uname -s 00:04:27.099 21:05:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:27.099 21:05:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:27.099 21:05:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:27.099 21:05:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:27.099 21:05:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:27.099 21:05:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:27.099 21:05:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:27.099 21:05:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:27.099 21:05:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:27.099 21:05:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:27.099 21:05:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:04:27.099 21:05:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:04:27.099 21:05:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:27.099 21:05:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:27.099 21:05:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:27.099 21:05:21 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:27.099 21:05:21 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:04:27.099 21:05:21 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:27.099 21:05:21 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:27.099 21:05:21 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:27.099 21:05:21 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:27.099 21:05:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:27.099 21:05:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:27.099 21:05:21 -- paths/export.sh@5 -- # export PATH 00:04:27.099 21:05:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:27.099 21:05:21 -- nvmf/common.sh@47 -- # : 0 00:04:27.099 21:05:21 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:27.099 21:05:21 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:27.099 21:05:21 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:27.099 21:05:21 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:27.099 21:05:21 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:27.099 21:05:21 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:27.099 21:05:21 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:27.099 21:05:21 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:27.099 21:05:21 -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/common.sh 00:04:27.099 21:05:21 -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:27.099 21:05:21 -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:27.099 21:05:21 -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:27.099 21:05:21 -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:27.099 21:05:21 -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:27.099 21:05:21 -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:27.099 21:05:21 -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:27.099 21:05:21 -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:27.099 21:05:21 -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:27.099 21:05:21 -- json_config/json_config.sh@33 
-- # declare -A app_params 00:04:27.099 21:05:21 -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_initiator_config.json') 00:04:27.099 21:05:21 -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:27.099 21:05:21 -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:27.099 21:05:21 -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:27.099 21:05:21 -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:04:27.099 INFO: JSON configuration test init 00:04:27.099 21:05:21 -- json_config/json_config.sh@357 -- # json_config_test_init 00:04:27.099 21:05:21 -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:04:27.099 21:05:21 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:27.099 21:05:21 -- common/autotest_common.sh@10 -- # set +x 00:04:27.099 21:05:21 -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:04:27.099 21:05:21 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:27.100 21:05:21 -- common/autotest_common.sh@10 -- # set +x 00:04:27.100 21:05:21 -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:04:27.100 21:05:21 -- json_config/common.sh@9 -- # local app=target 00:04:27.100 21:05:21 -- json_config/common.sh@10 -- # shift 00:04:27.100 21:05:21 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:27.100 21:05:21 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:27.100 21:05:21 -- json_config/common.sh@15 -- # local app_extra_params= 00:04:27.100 21:05:21 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:27.100 21:05:21 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:27.100 21:05:21 -- json_config/common.sh@22 -- # app_pid["$app"]=1219302 00:04:27.100 21:05:21 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:27.100 Waiting for target to run... 00:04:27.100 21:05:21 -- json_config/common.sh@25 -- # waitforlisten 1219302 /var/tmp/spdk_tgt.sock 00:04:27.100 21:05:21 -- common/autotest_common.sh@817 -- # '[' -z 1219302 ']' 00:04:27.100 21:05:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:27.100 21:05:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:27.100 21:05:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:27.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:27.100 21:05:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:27.100 21:05:21 -- common/autotest_common.sh@10 -- # set +x 00:04:27.100 21:05:21 -- json_config/common.sh@21 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:27.359 [2024-04-23 21:05:21.453269] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 
00:04:27.359 [2024-04-23 21:05:21.453407] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1219302 ] 00:04:27.359 EAL: No free 2048 kB hugepages reported on node 1 00:04:27.927 [2024-04-23 21:05:21.938221] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:27.927 [2024-04-23 21:05:22.028938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:27.927 21:05:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:27.927 21:05:22 -- common/autotest_common.sh@850 -- # return 0 00:04:27.927 21:05:22 -- json_config/common.sh@26 -- # echo '' 00:04:27.927 00:04:27.927 21:05:22 -- json_config/json_config.sh@269 -- # create_accel_config 00:04:27.927 21:05:22 -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:04:27.927 21:05:22 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:27.927 21:05:22 -- common/autotest_common.sh@10 -- # set +x 00:04:27.927 21:05:22 -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:04:27.927 21:05:22 -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:04:27.927 21:05:22 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:27.927 21:05:22 -- common/autotest_common.sh@10 -- # set +x 00:04:27.927 21:05:22 -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:27.927 21:05:22 -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:04:27.927 21:05:22 -- json_config/common.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:29.303 21:05:23 -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:04:29.303 21:05:23 -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:29.303 21:05:23 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:29.303 21:05:23 -- common/autotest_common.sh@10 -- # set +x 00:04:29.303 21:05:23 -- json_config/json_config.sh@45 -- # local ret=0 00:04:29.303 21:05:23 -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:29.303 21:05:23 -- json_config/json_config.sh@46 -- # local enabled_types 00:04:29.303 21:05:23 -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:29.303 21:05:23 -- json_config/common.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:29.303 21:05:23 -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:29.303 21:05:23 -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:29.303 21:05:23 -- json_config/json_config.sh@48 -- # local get_types 00:04:29.304 21:05:23 -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:29.304 21:05:23 -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:04:29.304 21:05:23 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:29.304 21:05:23 -- common/autotest_common.sh@10 -- # set +x 00:04:29.304 21:05:23 -- json_config/json_config.sh@55 -- # return 0 00:04:29.304 21:05:23 -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:04:29.304 21:05:23 -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:29.304 21:05:23 -- json_config/json_config.sh@286 -- # [[ 
0 -eq 1 ]] 00:04:29.304 21:05:23 -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:04:29.304 21:05:23 -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:04:29.304 21:05:23 -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:04:29.304 21:05:23 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:29.304 21:05:23 -- common/autotest_common.sh@10 -- # set +x 00:04:29.304 21:05:23 -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:29.304 21:05:23 -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:04:29.304 21:05:23 -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:04:29.304 21:05:23 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:29.304 21:05:23 -- json_config/common.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:29.562 MallocForNvmf0 00:04:29.562 21:05:23 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:29.562 21:05:23 -- json_config/common.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:29.562 MallocForNvmf1 00:04:29.562 21:05:23 -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:29.562 21:05:23 -- json_config/common.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:29.820 [2024-04-23 21:05:23.959718] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:29.820 21:05:23 -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:29.820 21:05:23 -- json_config/common.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:30.078 21:05:24 -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:30.078 21:05:24 -- json_config/common.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:30.078 21:05:24 -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:30.078 21:05:24 -- json_config/common.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:30.337 21:05:24 -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:30.337 21:05:24 -- json_config/common.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:30.337 [2024-04-23 21:05:24.556304] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:30.337 21:05:24 -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:04:30.337 21:05:24 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:30.337 21:05:24 -- common/autotest_common.sh@10 -- # set +x 00:04:30.596 21:05:24 
-- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:04:30.596 21:05:24 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:30.596 21:05:24 -- common/autotest_common.sh@10 -- # set +x 00:04:30.596 21:05:24 -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:04:30.596 21:05:24 -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:30.596 21:05:24 -- json_config/common.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:30.596 MallocBdevForConfigChangeCheck 00:04:30.596 21:05:24 -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:04:30.596 21:05:24 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:30.596 21:05:24 -- common/autotest_common.sh@10 -- # set +x 00:04:30.596 21:05:24 -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:04:30.596 21:05:24 -- json_config/common.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:30.855 21:05:25 -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:04:30.855 INFO: shutting down applications... 00:04:30.855 21:05:25 -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:04:30.855 21:05:25 -- json_config/json_config.sh@368 -- # json_config_clear target 00:04:30.855 21:05:25 -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:04:30.855 21:05:25 -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:33.464 Calling clear_iscsi_subsystem 00:04:33.464 Calling clear_nvmf_subsystem 00:04:33.464 Calling clear_nbd_subsystem 00:04:33.464 Calling clear_ublk_subsystem 00:04:33.464 Calling clear_vhost_blk_subsystem 00:04:33.464 Calling clear_vhost_scsi_subsystem 00:04:33.464 Calling clear_bdev_subsystem 00:04:33.464 21:05:27 -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/config_filter.py 00:04:33.464 21:05:27 -- json_config/json_config.sh@343 -- # count=100 00:04:33.464 21:05:27 -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:04:33.464 21:05:27 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:33.464 21:05:27 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:33.464 21:05:27 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:33.464 21:05:27 -- json_config/json_config.sh@345 -- # break 00:04:33.464 21:05:27 -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:04:33.464 21:05:27 -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:04:33.464 21:05:27 -- json_config/common.sh@31 -- # local app=target 00:04:33.464 21:05:27 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:33.464 21:05:27 -- json_config/common.sh@35 -- # [[ -n 1219302 ]] 00:04:33.464 21:05:27 -- json_config/common.sh@38 -- # kill -SIGINT 1219302 00:04:33.464 21:05:27 -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:33.464 21:05:27 -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:33.464 21:05:27 -- json_config/common.sh@41 -- # kill -0 1219302 
00:04:33.464 21:05:27 -- json_config/common.sh@45 -- # sleep 0.5 00:04:33.725 21:05:27 -- json_config/common.sh@40 -- # (( i++ )) 00:04:33.725 21:05:27 -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:33.725 21:05:27 -- json_config/common.sh@41 -- # kill -0 1219302 00:04:33.725 21:05:27 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:33.725 21:05:27 -- json_config/common.sh@43 -- # break 00:04:33.725 21:05:27 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:33.725 21:05:27 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:33.725 SPDK target shutdown done 00:04:33.725 21:05:27 -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:04:33.725 INFO: relaunching applications... 00:04:33.725 21:05:27 -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_tgt_config.json 00:04:33.725 21:05:27 -- json_config/common.sh@9 -- # local app=target 00:04:33.725 21:05:27 -- json_config/common.sh@10 -- # shift 00:04:33.725 21:05:27 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:33.725 21:05:27 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:33.725 21:05:27 -- json_config/common.sh@15 -- # local app_extra_params= 00:04:33.725 21:05:27 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:33.725 21:05:27 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:33.725 21:05:27 -- json_config/common.sh@22 -- # app_pid["$app"]=1220780 00:04:33.725 21:05:27 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:33.725 Waiting for target to run... 00:04:33.725 21:05:27 -- json_config/common.sh@25 -- # waitforlisten 1220780 /var/tmp/spdk_tgt.sock 00:04:33.725 21:05:27 -- common/autotest_common.sh@817 -- # '[' -z 1220780 ']' 00:04:33.725 21:05:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:33.725 21:05:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:33.725 21:05:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:33.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:33.725 21:05:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:33.725 21:05:27 -- common/autotest_common.sh@10 -- # set +x 00:04:33.725 21:05:27 -- json_config/common.sh@21 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_tgt_config.json 00:04:33.725 [2024-04-23 21:05:27.950203] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 
00:04:33.725 [2024-04-23 21:05:27.950346] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1220780 ] 00:04:33.984 EAL: No free 2048 kB hugepages reported on node 1 00:04:34.243 [2024-04-23 21:05:28.447245] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.503 [2024-04-23 21:05:28.540802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.439 [2024-04-23 21:05:29.656417] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:35.439 [2024-04-23 21:05:29.688704] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:35.698 21:05:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:35.698 21:05:29 -- common/autotest_common.sh@850 -- # return 0 00:04:35.698 21:05:29 -- json_config/common.sh@26 -- # echo '' 00:04:35.698 00:04:35.698 21:05:29 -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:04:35.698 21:05:29 -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:35.698 INFO: Checking if target configuration is the same... 00:04:35.698 21:05:29 -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_tgt_config.json 00:04:35.698 21:05:29 -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:04:35.698 21:05:29 -- json_config/common.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:35.698 + '[' 2 -ne 2 ']' 00:04:35.698 +++ dirname /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:35.698 ++ readlink -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/../.. 00:04:35.698 + rootdir=/var/jenkins/workspace/dsa-phy-autotest/spdk 00:04:35.698 +++ basename /dev/fd/62 00:04:35.698 ++ mktemp /tmp/62.XXX 00:04:35.698 + tmp_file_1=/tmp/62.UqB 00:04:35.698 +++ basename /var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_tgt_config.json 00:04:35.698 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:35.698 + tmp_file_2=/tmp/spdk_tgt_config.json.Xmz 00:04:35.698 + ret=0 00:04:35.698 + /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:35.957 + /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:35.957 + diff -u /tmp/62.UqB /tmp/spdk_tgt_config.json.Xmz 00:04:35.957 + echo 'INFO: JSON config files are the same' 00:04:35.957 INFO: JSON config files are the same 00:04:35.957 + rm /tmp/62.UqB /tmp/spdk_tgt_config.json.Xmz 00:04:35.957 + exit 0 00:04:35.957 21:05:30 -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:04:35.957 21:05:30 -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:35.957 INFO: changing configuration and checking if this can be detected... 
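Note: the "configuration is the same" check that just passed normalizes both sides with config_filter.py -method sort and diffs the results: one side is a fresh save_config dump from the relaunched target (handed to json_diff.sh as /dev/fd/62), the other is the spdk_tgt_config.json the target was booted from. Roughly equivalent, assuming the filter reads stdin and writes stdout as the trace suggests (temp-file names illustrative):

    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | test/json_config/config_filter.py -method sort > /tmp/live.sorted
    test/json_config/config_filter.py -method sort \
        < spdk_tgt_config.json > /tmp/disk.sorted
    diff -u /tmp/live.sorted /tmp/disk.sorted && echo 'configs match'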
00:04:35.957 21:05:30 -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:35.957 21:05:30 -- json_config/common.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:35.957 21:05:30 -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_tgt_config.json 00:04:35.957 21:05:30 -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:04:35.957 21:05:30 -- json_config/common.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:35.957 + '[' 2 -ne 2 ']' 00:04:35.957 +++ dirname /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:35.957 ++ readlink -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/../.. 00:04:35.957 + rootdir=/var/jenkins/workspace/dsa-phy-autotest/spdk 00:04:35.957 +++ basename /dev/fd/62 00:04:35.957 ++ mktemp /tmp/62.XXX 00:04:35.957 + tmp_file_1=/tmp/62.3CO 00:04:35.957 +++ basename /var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_tgt_config.json 00:04:35.957 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:35.957 + tmp_file_2=/tmp/spdk_tgt_config.json.ryg 00:04:35.957 + ret=0 00:04:35.957 + /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:36.215 + /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:36.474 + diff -u /tmp/62.3CO /tmp/spdk_tgt_config.json.ryg 00:04:36.474 + ret=1 00:04:36.474 + echo '=== Start of file: /tmp/62.3CO ===' 00:04:36.474 + cat /tmp/62.3CO 00:04:36.474 + echo '=== End of file: /tmp/62.3CO ===' 00:04:36.474 + echo '' 00:04:36.474 + echo '=== Start of file: /tmp/spdk_tgt_config.json.ryg ===' 00:04:36.474 + cat /tmp/spdk_tgt_config.json.ryg 00:04:36.474 + echo '=== End of file: /tmp/spdk_tgt_config.json.ryg ===' 00:04:36.474 + echo '' 00:04:36.474 + rm /tmp/62.3CO /tmp/spdk_tgt_config.json.ryg 00:04:36.474 + exit 1 00:04:36.474 21:05:30 -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:04:36.474 INFO: configuration change detected. 
00:04:36.474 21:05:30 -- json_config/json_config.sh@394 -- # json_config_test_fini 00:04:36.474 21:05:30 -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:04:36.474 21:05:30 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:36.474 21:05:30 -- common/autotest_common.sh@10 -- # set +x 00:04:36.474 21:05:30 -- json_config/json_config.sh@307 -- # local ret=0 00:04:36.474 21:05:30 -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:04:36.474 21:05:30 -- json_config/json_config.sh@317 -- # [[ -n 1220780 ]] 00:04:36.474 21:05:30 -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:04:36.474 21:05:30 -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:04:36.474 21:05:30 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:36.474 21:05:30 -- common/autotest_common.sh@10 -- # set +x 00:04:36.474 21:05:30 -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:04:36.474 21:05:30 -- json_config/json_config.sh@193 -- # uname -s 00:04:36.474 21:05:30 -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:04:36.474 21:05:30 -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:04:36.474 21:05:30 -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:04:36.474 21:05:30 -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:04:36.474 21:05:30 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:36.474 21:05:30 -- common/autotest_common.sh@10 -- # set +x 00:04:36.474 21:05:30 -- json_config/json_config.sh@323 -- # killprocess 1220780 00:04:36.474 21:05:30 -- common/autotest_common.sh@936 -- # '[' -z 1220780 ']' 00:04:36.474 21:05:30 -- common/autotest_common.sh@940 -- # kill -0 1220780 00:04:36.474 21:05:30 -- common/autotest_common.sh@941 -- # uname 00:04:36.474 21:05:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:36.474 21:05:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1220780 00:04:36.474 21:05:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:36.474 21:05:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:36.474 21:05:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1220780' 00:04:36.474 killing process with pid 1220780 00:04:36.474 21:05:30 -- common/autotest_common.sh@955 -- # kill 1220780 00:04:36.474 21:05:30 -- common/autotest_common.sh@960 -- # wait 1220780 00:04:37.850 21:05:31 -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_tgt_config.json 00:04:37.850 21:05:31 -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:04:37.850 21:05:31 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:37.850 21:05:31 -- common/autotest_common.sh@10 -- # set +x 00:04:37.850 21:05:31 -- json_config/json_config.sh@328 -- # return 0 00:04:37.850 21:05:31 -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:04:37.850 INFO: Success 00:04:37.850 00:04:37.850 real 0m10.729s 00:04:37.850 user 0m11.078s 00:04:37.850 sys 0m2.328s 00:04:37.850 21:05:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:37.850 21:05:31 -- common/autotest_common.sh@10 -- # set +x 00:04:37.850 ************************************ 00:04:37.850 END TEST json_config 00:04:37.850 ************************************ 00:04:37.850 21:05:32 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:37.850 21:05:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:37.850 21:05:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:37.850 21:05:32 -- common/autotest_common.sh@10 -- # set +x 00:04:37.850 ************************************ 00:04:37.850 START TEST json_config_extra_key 00:04:37.850 ************************************ 00:04:38.110 21:05:32 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:38.110 21:05:32 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:04:38.110 21:05:32 -- nvmf/common.sh@7 -- # uname -s 00:04:38.110 21:05:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:38.110 21:05:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:38.110 21:05:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:38.110 21:05:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:38.110 21:05:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:38.110 21:05:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:38.110 21:05:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:38.110 21:05:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:38.110 21:05:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:38.110 21:05:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:38.110 21:05:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:04:38.110 21:05:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:04:38.110 21:05:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:38.110 21:05:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:38.110 21:05:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:38.110 21:05:32 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:38.110 21:05:32 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:04:38.110 21:05:32 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:38.110 21:05:32 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:38.110 21:05:32 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:38.110 21:05:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:38.110 21:05:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:38.110 21:05:32 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:38.110 21:05:32 -- paths/export.sh@5 -- # export PATH 00:04:38.110 21:05:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:38.110 21:05:32 -- nvmf/common.sh@47 -- # : 0 00:04:38.110 21:05:32 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:38.110 21:05:32 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:38.110 21:05:32 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:38.110 21:05:32 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:38.110 21:05:32 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:38.110 21:05:32 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:38.110 21:05:32 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:38.110 21:05:32 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:38.110 21:05:32 -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/common.sh 00:04:38.110 21:05:32 -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:38.110 21:05:32 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:38.110 21:05:32 -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:38.110 21:05:32 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:38.110 21:05:32 -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:38.110 21:05:32 -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:38.110 21:05:32 -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:38.110 21:05:32 -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:38.110 21:05:32 -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:38.110 21:05:32 -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:38.110 INFO: launching applications... 
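Note: json_config_extra_key only needs to prove that a target booted directly from a canned JSON config (test/json_config/extra_key.json) comes up, answers RPC on the custom socket, and shuts down cleanly; the launch that follows is exactly that. A waitforlisten-style sketch (using rpc_get_methods as the liveness probe is an assumption here; any cheap RPC would do):

    ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json test/json_config/extra_key.json &
    pid=$!
    for _ in $(seq 1 100); do   # poll until the RPC socket answers
        scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done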
00:04:38.110 21:05:32 -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/extra_key.json 00:04:38.110 21:05:32 -- json_config/common.sh@9 -- # local app=target 00:04:38.110 21:05:32 -- json_config/common.sh@10 -- # shift 00:04:38.110 21:05:32 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:38.110 21:05:32 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:38.110 21:05:32 -- json_config/common.sh@15 -- # local app_extra_params= 00:04:38.110 21:05:32 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:38.110 21:05:32 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:38.110 21:05:32 -- json_config/common.sh@22 -- # app_pid["$app"]=1221770 00:04:38.110 21:05:32 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:38.111 Waiting for target to run... 00:04:38.111 21:05:32 -- json_config/common.sh@25 -- # waitforlisten 1221770 /var/tmp/spdk_tgt.sock 00:04:38.111 21:05:32 -- common/autotest_common.sh@817 -- # '[' -z 1221770 ']' 00:04:38.111 21:05:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:38.111 21:05:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:38.111 21:05:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:38.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:38.111 21:05:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:38.111 21:05:32 -- common/autotest_common.sh@10 -- # set +x 00:04:38.111 21:05:32 -- json_config/common.sh@21 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/extra_key.json 00:04:38.111 [2024-04-23 21:05:32.277558] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 00:04:38.111 [2024-04-23 21:05:32.277675] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1221770 ] 00:04:38.111 EAL: No free 2048 kB hugepages reported on node 1 00:04:38.370 [2024-04-23 21:05:32.558253] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.370 [2024-04-23 21:05:32.636047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.938 21:05:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:38.938 21:05:33 -- common/autotest_common.sh@850 -- # return 0 00:04:38.938 21:05:33 -- json_config/common.sh@26 -- # echo '' 00:04:38.938 00:04:38.938 21:05:33 -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:38.938 INFO: shutting down applications... 
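Note: shutdown is cooperative. The loop that follows sends SIGINT once, then polls with kill -0 (signal 0 probes for process existence without delivering anything) for up to thirty half-second intervals before giving up. Condensed:

    kill -SIGINT "$pid"
    for _ in $(seq 1 30); do
        kill -0 "$pid" 2>/dev/null || break   # still alive? keep waiting
        sleep 0.5
    done
    echo 'SPDK target shutdown done'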
00:04:38.938 21:05:33 -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:38.938 21:05:33 -- json_config/common.sh@31 -- # local app=target 00:04:38.938 21:05:33 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:38.938 21:05:33 -- json_config/common.sh@35 -- # [[ -n 1221770 ]] 00:04:38.938 21:05:33 -- json_config/common.sh@38 -- # kill -SIGINT 1221770 00:04:38.938 21:05:33 -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:38.938 21:05:33 -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:38.938 21:05:33 -- json_config/common.sh@41 -- # kill -0 1221770 00:04:38.938 21:05:33 -- json_config/common.sh@45 -- # sleep 0.5 00:04:39.507 21:05:33 -- json_config/common.sh@40 -- # (( i++ )) 00:04:39.507 21:05:33 -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:39.507 21:05:33 -- json_config/common.sh@41 -- # kill -0 1221770 00:04:39.507 21:05:33 -- json_config/common.sh@45 -- # sleep 0.5 00:04:40.075 21:05:34 -- json_config/common.sh@40 -- # (( i++ )) 00:04:40.075 21:05:34 -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:40.075 21:05:34 -- json_config/common.sh@41 -- # kill -0 1221770 00:04:40.075 21:05:34 -- json_config/common.sh@45 -- # sleep 0.5 00:04:40.334 21:05:34 -- json_config/common.sh@40 -- # (( i++ )) 00:04:40.334 21:05:34 -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:40.334 21:05:34 -- json_config/common.sh@41 -- # kill -0 1221770 00:04:40.334 21:05:34 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:40.334 21:05:34 -- json_config/common.sh@43 -- # break 00:04:40.334 21:05:34 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:40.334 21:05:34 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:40.334 SPDK target shutdown done 00:04:40.334 21:05:34 -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:40.334 Success 00:04:40.334 00:04:40.334 real 0m2.424s 00:04:40.334 user 0m1.785s 00:04:40.334 sys 0m0.468s 00:04:40.334 21:05:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:40.334 21:05:34 -- common/autotest_common.sh@10 -- # set +x 00:04:40.334 ************************************ 00:04:40.334 END TEST json_config_extra_key 00:04:40.334 ************************************ 00:04:40.334 21:05:34 -- spdk/autotest.sh@170 -- # run_test alias_rpc /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:40.334 21:05:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:40.334 21:05:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:40.334 21:05:34 -- common/autotest_common.sh@10 -- # set +x 00:04:40.593 ************************************ 00:04:40.593 START TEST alias_rpc 00:04:40.593 ************************************ 00:04:40.593 21:05:34 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:40.593 * Looking for test storage... 
00:04:40.593 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/alias_rpc 00:04:40.593 21:05:34 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:40.593 21:05:34 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1222169 00:04:40.593 21:05:34 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1222169 00:04:40.593 21:05:34 -- common/autotest_common.sh@817 -- # '[' -z 1222169 ']' 00:04:40.593 21:05:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:40.593 21:05:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:40.593 21:05:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:40.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:40.593 21:05:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:40.593 21:05:34 -- common/autotest_common.sh@10 -- # set +x 00:04:40.593 21:05:34 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt 00:04:40.852 [2024-04-23 21:05:34.894166] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 00:04:40.852 [2024-04-23 21:05:34.894312] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1222169 ] 00:04:40.852 EAL: No free 2048 kB hugepages reported on node 1 00:04:40.852 [2024-04-23 21:05:35.036431] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.111 [2024-04-23 21:05:35.131249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.679 21:05:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:41.679 21:05:35 -- common/autotest_common.sh@850 -- # return 0 00:04:41.679 21:05:35 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:41.679 21:05:35 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1222169 00:04:41.679 21:05:35 -- common/autotest_common.sh@936 -- # '[' -z 1222169 ']' 00:04:41.679 21:05:35 -- common/autotest_common.sh@940 -- # kill -0 1222169 00:04:41.679 21:05:35 -- common/autotest_common.sh@941 -- # uname 00:04:41.679 21:05:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:41.679 21:05:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1222169 00:04:41.679 21:05:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:41.679 21:05:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:41.679 21:05:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1222169' 00:04:41.679 killing process with pid 1222169 00:04:41.679 21:05:35 -- common/autotest_common.sh@955 -- # kill 1222169 00:04:41.679 21:05:35 -- common/autotest_common.sh@960 -- # wait 1222169 00:04:42.614 00:04:42.614 real 0m2.062s 00:04:42.614 user 0m2.080s 00:04:42.614 sys 0m0.515s 00:04:42.614 21:05:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:42.614 21:05:36 -- common/autotest_common.sh@10 -- # set +x 00:04:42.614 ************************************ 00:04:42.614 END TEST alias_rpc 00:04:42.614 ************************************ 00:04:42.614 21:05:36 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:04:42.614 21:05:36 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:42.614 21:05:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:42.614 21:05:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:42.614 21:05:36 -- common/autotest_common.sh@10 -- # set +x 00:04:42.873 ************************************ 00:04:42.873 START TEST spdkcli_tcp 00:04:42.873 ************************************ 00:04:42.873 21:05:36 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:42.873 * Looking for test storage... 00:04:42.873 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli 00:04:42.873 21:05:36 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/common.sh 00:04:42.873 21:05:36 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:42.873 21:05:36 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/clear_config.py 00:04:42.873 21:05:36 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:42.873 21:05:36 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:42.873 21:05:36 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:42.873 21:05:36 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:42.873 21:05:36 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:42.873 21:05:36 -- common/autotest_common.sh@10 -- # set +x 00:04:42.873 21:05:36 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1222805 00:04:42.873 21:05:36 -- spdkcli/tcp.sh@27 -- # waitforlisten 1222805 00:04:42.873 21:05:36 -- common/autotest_common.sh@817 -- # '[' -z 1222805 ']' 00:04:42.873 21:05:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:42.873 21:05:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:42.873 21:05:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:42.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:42.873 21:05:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:42.873 21:05:36 -- common/autotest_common.sh@10 -- # set +x 00:04:42.873 21:05:36 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:42.873 [2024-04-23 21:05:37.064336] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 
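The spdkcli_tcp test starting here is about transport rather than methods: tcp.sh launches spdk_tgt with -m 0x3 -p 0 (two reactors, main core 0), bridges TCP port 9998 to the target's UNIX socket with socat, and then issues rpc_get_methods through 127.0.0.1:9998 with connection retries. Reduced to its two essential commands (sketch; assumes the target from the trace is already running), with the long JSON array further down being that call's reply:

    # Bridge TCP 9998 to the RPC UNIX socket, then drive RPC over TCP.
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    ./scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods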
00:04:42.873 [2024-04-23 21:05:37.064471] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1222805 ] 00:04:43.132 EAL: No free 2048 kB hugepages reported on node 1 00:04:43.132 [2024-04-23 21:05:37.198121] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:43.132 [2024-04-23 21:05:37.294974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.132 [2024-04-23 21:05:37.294983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:43.699 21:05:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:43.699 21:05:37 -- common/autotest_common.sh@850 -- # return 0 00:04:43.699 21:05:37 -- spdkcli/tcp.sh@31 -- # socat_pid=1222828 00:04:43.699 21:05:37 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:43.699 21:05:37 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:43.699 [ 00:04:43.699 "bdev_malloc_delete", 00:04:43.699 "bdev_malloc_create", 00:04:43.699 "bdev_null_resize", 00:04:43.699 "bdev_null_delete", 00:04:43.699 "bdev_null_create", 00:04:43.699 "bdev_nvme_cuse_unregister", 00:04:43.699 "bdev_nvme_cuse_register", 00:04:43.699 "bdev_opal_new_user", 00:04:43.699 "bdev_opal_set_lock_state", 00:04:43.699 "bdev_opal_delete", 00:04:43.699 "bdev_opal_get_info", 00:04:43.699 "bdev_opal_create", 00:04:43.699 "bdev_nvme_opal_revert", 00:04:43.699 "bdev_nvme_opal_init", 00:04:43.699 "bdev_nvme_send_cmd", 00:04:43.699 "bdev_nvme_get_path_iostat", 00:04:43.699 "bdev_nvme_get_mdns_discovery_info", 00:04:43.699 "bdev_nvme_stop_mdns_discovery", 00:04:43.699 "bdev_nvme_start_mdns_discovery", 00:04:43.699 "bdev_nvme_set_multipath_policy", 00:04:43.699 "bdev_nvme_set_preferred_path", 00:04:43.699 "bdev_nvme_get_io_paths", 00:04:43.699 "bdev_nvme_remove_error_injection", 00:04:43.699 "bdev_nvme_add_error_injection", 00:04:43.699 "bdev_nvme_get_discovery_info", 00:04:43.699 "bdev_nvme_stop_discovery", 00:04:43.699 "bdev_nvme_start_discovery", 00:04:43.699 "bdev_nvme_get_controller_health_info", 00:04:43.699 "bdev_nvme_disable_controller", 00:04:43.699 "bdev_nvme_enable_controller", 00:04:43.699 "bdev_nvme_reset_controller", 00:04:43.699 "bdev_nvme_get_transport_statistics", 00:04:43.699 "bdev_nvme_apply_firmware", 00:04:43.699 "bdev_nvme_detach_controller", 00:04:43.699 "bdev_nvme_get_controllers", 00:04:43.699 "bdev_nvme_attach_controller", 00:04:43.699 "bdev_nvme_set_hotplug", 00:04:43.699 "bdev_nvme_set_options", 00:04:43.699 "bdev_passthru_delete", 00:04:43.699 "bdev_passthru_create", 00:04:43.699 "bdev_lvol_grow_lvstore", 00:04:43.699 "bdev_lvol_get_lvols", 00:04:43.699 "bdev_lvol_get_lvstores", 00:04:43.699 "bdev_lvol_delete", 00:04:43.699 "bdev_lvol_set_read_only", 00:04:43.699 "bdev_lvol_resize", 00:04:43.699 "bdev_lvol_decouple_parent", 00:04:43.699 "bdev_lvol_inflate", 00:04:43.699 "bdev_lvol_rename", 00:04:43.699 "bdev_lvol_clone_bdev", 00:04:43.699 "bdev_lvol_clone", 00:04:43.699 "bdev_lvol_snapshot", 00:04:43.699 "bdev_lvol_create", 00:04:43.699 "bdev_lvol_delete_lvstore", 00:04:43.699 "bdev_lvol_rename_lvstore", 00:04:43.699 "bdev_lvol_create_lvstore", 00:04:43.699 "bdev_raid_set_options", 00:04:43.699 "bdev_raid_remove_base_bdev", 00:04:43.699 "bdev_raid_add_base_bdev", 00:04:43.699 "bdev_raid_delete", 00:04:43.699 "bdev_raid_create", 
00:04:43.699 "bdev_raid_get_bdevs", 00:04:43.699 "bdev_error_inject_error", 00:04:43.699 "bdev_error_delete", 00:04:43.699 "bdev_error_create", 00:04:43.699 "bdev_split_delete", 00:04:43.699 "bdev_split_create", 00:04:43.699 "bdev_delay_delete", 00:04:43.699 "bdev_delay_create", 00:04:43.699 "bdev_delay_update_latency", 00:04:43.699 "bdev_zone_block_delete", 00:04:43.699 "bdev_zone_block_create", 00:04:43.699 "blobfs_create", 00:04:43.699 "blobfs_detect", 00:04:43.699 "blobfs_set_cache_size", 00:04:43.699 "bdev_aio_delete", 00:04:43.699 "bdev_aio_rescan", 00:04:43.699 "bdev_aio_create", 00:04:43.699 "bdev_ftl_set_property", 00:04:43.699 "bdev_ftl_get_properties", 00:04:43.699 "bdev_ftl_get_stats", 00:04:43.699 "bdev_ftl_unmap", 00:04:43.699 "bdev_ftl_unload", 00:04:43.699 "bdev_ftl_delete", 00:04:43.699 "bdev_ftl_load", 00:04:43.699 "bdev_ftl_create", 00:04:43.699 "bdev_virtio_attach_controller", 00:04:43.699 "bdev_virtio_scsi_get_devices", 00:04:43.699 "bdev_virtio_detach_controller", 00:04:43.699 "bdev_virtio_blk_set_hotplug", 00:04:43.699 "bdev_iscsi_delete", 00:04:43.699 "bdev_iscsi_create", 00:04:43.699 "bdev_iscsi_set_options", 00:04:43.699 "accel_error_inject_error", 00:04:43.699 "ioat_scan_accel_module", 00:04:43.699 "dsa_scan_accel_module", 00:04:43.699 "iaa_scan_accel_module", 00:04:43.699 "keyring_file_remove_key", 00:04:43.699 "keyring_file_add_key", 00:04:43.699 "iscsi_get_histogram", 00:04:43.699 "iscsi_enable_histogram", 00:04:43.699 "iscsi_set_options", 00:04:43.699 "iscsi_get_auth_groups", 00:04:43.699 "iscsi_auth_group_remove_secret", 00:04:43.699 "iscsi_auth_group_add_secret", 00:04:43.699 "iscsi_delete_auth_group", 00:04:43.699 "iscsi_create_auth_group", 00:04:43.699 "iscsi_set_discovery_auth", 00:04:43.699 "iscsi_get_options", 00:04:43.699 "iscsi_target_node_request_logout", 00:04:43.699 "iscsi_target_node_set_redirect", 00:04:43.699 "iscsi_target_node_set_auth", 00:04:43.699 "iscsi_target_node_add_lun", 00:04:43.699 "iscsi_get_stats", 00:04:43.699 "iscsi_get_connections", 00:04:43.699 "iscsi_portal_group_set_auth", 00:04:43.699 "iscsi_start_portal_group", 00:04:43.699 "iscsi_delete_portal_group", 00:04:43.699 "iscsi_create_portal_group", 00:04:43.699 "iscsi_get_portal_groups", 00:04:43.699 "iscsi_delete_target_node", 00:04:43.699 "iscsi_target_node_remove_pg_ig_maps", 00:04:43.700 "iscsi_target_node_add_pg_ig_maps", 00:04:43.700 "iscsi_create_target_node", 00:04:43.700 "iscsi_get_target_nodes", 00:04:43.700 "iscsi_delete_initiator_group", 00:04:43.700 "iscsi_initiator_group_remove_initiators", 00:04:43.700 "iscsi_initiator_group_add_initiators", 00:04:43.700 "iscsi_create_initiator_group", 00:04:43.700 "iscsi_get_initiator_groups", 00:04:43.700 "nvmf_set_crdt", 00:04:43.700 "nvmf_set_config", 00:04:43.700 "nvmf_set_max_subsystems", 00:04:43.700 "nvmf_subsystem_get_listeners", 00:04:43.700 "nvmf_subsystem_get_qpairs", 00:04:43.700 "nvmf_subsystem_get_controllers", 00:04:43.700 "nvmf_get_stats", 00:04:43.700 "nvmf_get_transports", 00:04:43.700 "nvmf_create_transport", 00:04:43.700 "nvmf_get_targets", 00:04:43.700 "nvmf_delete_target", 00:04:43.700 "nvmf_create_target", 00:04:43.700 "nvmf_subsystem_allow_any_host", 00:04:43.700 "nvmf_subsystem_remove_host", 00:04:43.700 "nvmf_subsystem_add_host", 00:04:43.700 "nvmf_ns_remove_host", 00:04:43.700 "nvmf_ns_add_host", 00:04:43.700 "nvmf_subsystem_remove_ns", 00:04:43.700 "nvmf_subsystem_add_ns", 00:04:43.700 "nvmf_subsystem_listener_set_ana_state", 00:04:43.700 "nvmf_discovery_get_referrals", 00:04:43.700 
"nvmf_discovery_remove_referral", 00:04:43.700 "nvmf_discovery_add_referral", 00:04:43.700 "nvmf_subsystem_remove_listener", 00:04:43.700 "nvmf_subsystem_add_listener", 00:04:43.700 "nvmf_delete_subsystem", 00:04:43.700 "nvmf_create_subsystem", 00:04:43.700 "nvmf_get_subsystems", 00:04:43.700 "env_dpdk_get_mem_stats", 00:04:43.700 "nbd_get_disks", 00:04:43.700 "nbd_stop_disk", 00:04:43.700 "nbd_start_disk", 00:04:43.700 "ublk_recover_disk", 00:04:43.700 "ublk_get_disks", 00:04:43.700 "ublk_stop_disk", 00:04:43.700 "ublk_start_disk", 00:04:43.700 "ublk_destroy_target", 00:04:43.700 "ublk_create_target", 00:04:43.700 "virtio_blk_create_transport", 00:04:43.700 "virtio_blk_get_transports", 00:04:43.700 "vhost_controller_set_coalescing", 00:04:43.700 "vhost_get_controllers", 00:04:43.700 "vhost_delete_controller", 00:04:43.700 "vhost_create_blk_controller", 00:04:43.700 "vhost_scsi_controller_remove_target", 00:04:43.700 "vhost_scsi_controller_add_target", 00:04:43.700 "vhost_start_scsi_controller", 00:04:43.700 "vhost_create_scsi_controller", 00:04:43.700 "thread_set_cpumask", 00:04:43.700 "framework_get_scheduler", 00:04:43.700 "framework_set_scheduler", 00:04:43.700 "framework_get_reactors", 00:04:43.700 "thread_get_io_channels", 00:04:43.700 "thread_get_pollers", 00:04:43.700 "thread_get_stats", 00:04:43.700 "framework_monitor_context_switch", 00:04:43.700 "spdk_kill_instance", 00:04:43.700 "log_enable_timestamps", 00:04:43.700 "log_get_flags", 00:04:43.700 "log_clear_flag", 00:04:43.700 "log_set_flag", 00:04:43.700 "log_get_level", 00:04:43.700 "log_set_level", 00:04:43.700 "log_get_print_level", 00:04:43.700 "log_set_print_level", 00:04:43.700 "framework_enable_cpumask_locks", 00:04:43.700 "framework_disable_cpumask_locks", 00:04:43.700 "framework_wait_init", 00:04:43.700 "framework_start_init", 00:04:43.700 "scsi_get_devices", 00:04:43.700 "bdev_get_histogram", 00:04:43.700 "bdev_enable_histogram", 00:04:43.700 "bdev_set_qos_limit", 00:04:43.700 "bdev_set_qd_sampling_period", 00:04:43.700 "bdev_get_bdevs", 00:04:43.700 "bdev_reset_iostat", 00:04:43.700 "bdev_get_iostat", 00:04:43.700 "bdev_examine", 00:04:43.700 "bdev_wait_for_examine", 00:04:43.700 "bdev_set_options", 00:04:43.700 "notify_get_notifications", 00:04:43.700 "notify_get_types", 00:04:43.700 "accel_get_stats", 00:04:43.700 "accel_set_options", 00:04:43.700 "accel_set_driver", 00:04:43.700 "accel_crypto_key_destroy", 00:04:43.700 "accel_crypto_keys_get", 00:04:43.700 "accel_crypto_key_create", 00:04:43.700 "accel_assign_opc", 00:04:43.700 "accel_get_module_info", 00:04:43.700 "accel_get_opc_assignments", 00:04:43.700 "vmd_rescan", 00:04:43.700 "vmd_remove_device", 00:04:43.700 "vmd_enable", 00:04:43.700 "sock_set_default_impl", 00:04:43.700 "sock_impl_set_options", 00:04:43.700 "sock_impl_get_options", 00:04:43.700 "iobuf_get_stats", 00:04:43.700 "iobuf_set_options", 00:04:43.700 "framework_get_pci_devices", 00:04:43.700 "framework_get_config", 00:04:43.700 "framework_get_subsystems", 00:04:43.700 "trace_get_info", 00:04:43.700 "trace_get_tpoint_group_mask", 00:04:43.700 "trace_disable_tpoint_group", 00:04:43.700 "trace_enable_tpoint_group", 00:04:43.700 "trace_clear_tpoint_mask", 00:04:43.700 "trace_set_tpoint_mask", 00:04:43.700 "keyring_get_keys", 00:04:43.700 "spdk_get_version", 00:04:43.700 "rpc_get_methods" 00:04:43.700 ] 00:04:43.700 21:05:37 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:43.700 21:05:37 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:43.700 21:05:37 -- 
common/autotest_common.sh@10 -- # set +x 00:04:43.700 21:05:37 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:43.700 21:05:37 -- spdkcli/tcp.sh@38 -- # killprocess 1222805 00:04:43.700 21:05:37 -- common/autotest_common.sh@936 -- # '[' -z 1222805 ']' 00:04:43.700 21:05:37 -- common/autotest_common.sh@940 -- # kill -0 1222805 00:04:43.700 21:05:37 -- common/autotest_common.sh@941 -- # uname 00:04:43.700 21:05:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:43.700 21:05:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1222805 00:04:43.959 21:05:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:43.959 21:05:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:43.959 21:05:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1222805' 00:04:43.959 killing process with pid 1222805 00:04:43.959 21:05:37 -- common/autotest_common.sh@955 -- # kill 1222805 00:04:43.959 21:05:37 -- common/autotest_common.sh@960 -- # wait 1222805 00:04:44.896 00:04:44.896 real 0m1.925s 00:04:44.896 user 0m3.222s 00:04:44.896 sys 0m0.532s 00:04:44.896 21:05:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:44.896 21:05:38 -- common/autotest_common.sh@10 -- # set +x 00:04:44.896 ************************************ 00:04:44.896 END TEST spdkcli_tcp 00:04:44.896 ************************************ 00:04:44.896 21:05:38 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /var/jenkins/workspace/dsa-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:44.896 21:05:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:44.896 21:05:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:44.896 21:05:38 -- common/autotest_common.sh@10 -- # set +x 00:04:44.896 ************************************ 00:04:44.896 START TEST dpdk_mem_utility 00:04:44.896 ************************************ 00:04:44.896 21:05:38 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:44.896 * Looking for test storage... 00:04:44.896 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/dpdk_memory_utility 00:04:44.896 21:05:39 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:44.896 21:05:39 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1223185 00:04:44.896 21:05:39 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1223185 00:04:44.896 21:05:39 -- common/autotest_common.sh@817 -- # '[' -z 1223185 ']' 00:04:44.896 21:05:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:44.896 21:05:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:44.896 21:05:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:44.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:44.896 21:05:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:44.896 21:05:39 -- common/autotest_common.sh@10 -- # set +x 00:04:44.896 21:05:39 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt 00:04:44.896 [2024-04-23 21:05:39.129791] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 
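The dpdk_mem_utility test starting here exercises two pieces: the env_dpdk_get_mem_stats RPC, which has the target write its heap, mempool, and memzone state to /tmp/spdk_mem_dump.txt (the {"filename": ...} reply below), and scripts/dpdk_mem_info.py, which parses that dump into the summaries that follow; the second invocation with -m 0 in the trace produces the per-element detail for heap 0. Run by hand against a live target it is just (sketch):

    # Dump and summarize the target's DPDK memory state (sketch).
    ./scripts/rpc.py env_dpdk_get_mem_stats    # target writes /tmp/spdk_mem_dump.txt
    ./scripts/dpdk_mem_info.py                 # heap / mempool / memzone totals
    ./scripts/dpdk_mem_info.py -m 0            # per-element view of heap 0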
00:04:44.896 [2024-04-23 21:05:39.129923] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1223185 ] 00:04:45.154 EAL: No free 2048 kB hugepages reported on node 1 00:04:45.154 [2024-04-23 21:05:39.262811] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.154 [2024-04-23 21:05:39.359422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.722 21:05:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:45.722 21:05:39 -- common/autotest_common.sh@850 -- # return 0 00:04:45.722 21:05:39 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:45.722 21:05:39 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:45.722 21:05:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:45.722 21:05:39 -- common/autotest_common.sh@10 -- # set +x 00:04:45.722 { 00:04:45.722 "filename": "/tmp/spdk_mem_dump.txt" 00:04:45.722 } 00:04:45.722 21:05:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:45.722 21:05:39 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:45.722 DPDK memory size 820.000000 MiB in 1 heap(s) 00:04:45.722 1 heaps totaling size 820.000000 MiB 00:04:45.722 size: 820.000000 MiB heap id: 0 00:04:45.722 end heaps---------- 00:04:45.722 8 mempools totaling size 598.116089 MiB 00:04:45.722 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:45.722 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:45.722 size: 84.521057 MiB name: bdev_io_1223185 00:04:45.722 size: 51.011292 MiB name: evtpool_1223185 00:04:45.722 size: 50.003479 MiB name: msgpool_1223185 00:04:45.722 size: 21.763794 MiB name: PDU_Pool 00:04:45.722 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:45.722 size: 0.026123 MiB name: Session_Pool 00:04:45.722 end mempools------- 00:04:45.722 6 memzones totaling size 4.142822 MiB 00:04:45.722 size: 1.000366 MiB name: RG_ring_0_1223185 00:04:45.722 size: 1.000366 MiB name: RG_ring_1_1223185 00:04:45.722 size: 1.000366 MiB name: RG_ring_4_1223185 00:04:45.722 size: 1.000366 MiB name: RG_ring_5_1223185 00:04:45.722 size: 0.125366 MiB name: RG_ring_2_1223185 00:04:45.722 size: 0.015991 MiB name: RG_ring_3_1223185 00:04:45.722 end memzones------- 00:04:45.722 21:05:39 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:45.982 heap id: 0 total size: 820.000000 MiB number of busy elements: 41 number of free elements: 19 00:04:45.982 list of free elements. 
size: 18.514832 MiB 00:04:45.982 element at address: 0x200000400000 with size: 1.999451 MiB 00:04:45.982 element at address: 0x200000800000 with size: 1.996887 MiB 00:04:45.982 element at address: 0x200007000000 with size: 1.995972 MiB 00:04:45.982 element at address: 0x20000b200000 with size: 1.995972 MiB 00:04:45.982 element at address: 0x200019100040 with size: 0.999939 MiB 00:04:45.982 element at address: 0x200019500040 with size: 0.999939 MiB 00:04:45.982 element at address: 0x200019600000 with size: 0.999329 MiB 00:04:45.982 element at address: 0x200003e00000 with size: 0.996094 MiB 00:04:45.982 element at address: 0x200032200000 with size: 0.994324 MiB 00:04:45.982 element at address: 0x200018e00000 with size: 0.959900 MiB 00:04:45.982 element at address: 0x200019900040 with size: 0.937256 MiB 00:04:45.982 element at address: 0x200000200000 with size: 0.840942 MiB 00:04:45.982 element at address: 0x20001b000000 with size: 0.583191 MiB 00:04:45.982 element at address: 0x200019200000 with size: 0.491150 MiB 00:04:45.982 element at address: 0x200019a00000 with size: 0.485657 MiB 00:04:45.982 element at address: 0x200013800000 with size: 0.470581 MiB 00:04:45.982 element at address: 0x200028400000 with size: 0.411072 MiB 00:04:45.982 element at address: 0x200003a00000 with size: 0.356140 MiB 00:04:45.982 element at address: 0x20000b1ff040 with size: 0.001038 MiB 00:04:45.982 list of standard malloc elements. size: 199.220764 MiB 00:04:45.982 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:04:45.982 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:04:45.982 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:04:45.982 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:04:45.982 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:04:45.982 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:04:45.982 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:04:45.982 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:04:45.982 element at address: 0x2000137ff040 with size: 0.000427 MiB 00:04:45.982 element at address: 0x2000137ffa00 with size: 0.000366 MiB 00:04:45.982 element at address: 0x2000002d7480 with size: 0.000244 MiB 00:04:45.982 element at address: 0x2000002d7580 with size: 0.000244 MiB 00:04:45.982 element at address: 0x2000002d7680 with size: 0.000244 MiB 00:04:45.982 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:04:45.982 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:04:45.982 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:04:45.982 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:04:45.982 element at address: 0x200003aff980 with size: 0.000244 MiB 00:04:45.982 element at address: 0x200003affa80 with size: 0.000244 MiB 00:04:45.982 element at address: 0x200003eff000 with size: 0.000244 MiB 00:04:45.982 element at address: 0x20000b1ff480 with size: 0.000244 MiB 00:04:45.982 element at address: 0x20000b1ff580 with size: 0.000244 MiB 00:04:45.982 element at address: 0x20000b1ff680 with size: 0.000244 MiB 00:04:45.982 element at address: 0x20000b1ff780 with size: 0.000244 MiB 00:04:45.982 element at address: 0x20000b1ff880 with size: 0.000244 MiB 00:04:45.982 element at address: 0x20000b1ff980 with size: 0.000244 MiB 00:04:45.982 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:04:45.982 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:04:45.982 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 
00:04:45.982 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:04:45.982 element at address: 0x2000137ff200 with size: 0.000244 MiB 00:04:45.982 element at address: 0x2000137ff300 with size: 0.000244 MiB 00:04:45.982 element at address: 0x2000137ff400 with size: 0.000244 MiB 00:04:45.982 element at address: 0x2000137ff500 with size: 0.000244 MiB 00:04:45.982 element at address: 0x2000137ff600 with size: 0.000244 MiB 00:04:45.982 element at address: 0x2000137ff700 with size: 0.000244 MiB 00:04:45.982 element at address: 0x2000137ff800 with size: 0.000244 MiB 00:04:45.982 element at address: 0x2000137ff900 with size: 0.000244 MiB 00:04:45.982 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:04:45.982 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:04:45.982 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:04:45.982 list of memzone associated elements. size: 602.264404 MiB 00:04:45.982 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:04:45.982 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:45.982 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:04:45.982 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:45.982 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:04:45.982 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1223185_0 00:04:45.982 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:04:45.982 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1223185_0 00:04:45.982 element at address: 0x200003fff340 with size: 48.003113 MiB 00:04:45.982 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1223185_0 00:04:45.982 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:04:45.982 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:45.982 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:04:45.982 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:45.982 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:04:45.982 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1223185 00:04:45.982 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:04:45.982 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1223185 00:04:45.982 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:04:45.982 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1223185 00:04:45.982 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:04:45.982 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:45.982 element at address: 0x200019abc780 with size: 1.008179 MiB 00:04:45.982 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:45.982 element at address: 0x200018efde00 with size: 1.008179 MiB 00:04:45.982 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:45.982 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:04:45.982 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:45.982 element at address: 0x200003eff100 with size: 1.000549 MiB 00:04:45.982 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1223185 00:04:45.982 element at address: 0x200003affb80 with size: 1.000549 MiB 00:04:45.982 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1223185 00:04:45.982 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:04:45.982 associated memzone info: size: 
1.000366 MiB name: RG_ring_4_1223185 00:04:45.982 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:04:45.982 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1223185 00:04:45.982 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:04:45.982 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1223185 00:04:45.982 element at address: 0x20001927dbc0 with size: 0.500549 MiB 00:04:45.982 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:45.982 element at address: 0x200013878780 with size: 0.500549 MiB 00:04:45.982 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:45.982 element at address: 0x200019a7c540 with size: 0.250549 MiB 00:04:45.982 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:45.982 element at address: 0x200003adf740 with size: 0.125549 MiB 00:04:45.982 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1223185 00:04:45.982 element at address: 0x200018ef5bc0 with size: 0.031799 MiB 00:04:45.982 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:45.982 element at address: 0x2000284693c0 with size: 0.023804 MiB 00:04:45.982 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:45.982 element at address: 0x200003adb500 with size: 0.016174 MiB 00:04:45.982 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1223185 00:04:45.982 element at address: 0x20002846f540 with size: 0.002502 MiB 00:04:45.982 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:45.982 element at address: 0x2000002d7780 with size: 0.000366 MiB 00:04:45.982 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1223185 00:04:45.982 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:04:45.982 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1223185 00:04:45.982 element at address: 0x20000b1ffa80 with size: 0.000366 MiB 00:04:45.982 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:45.982 21:05:40 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:45.982 21:05:40 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1223185 00:04:45.982 21:05:40 -- common/autotest_common.sh@936 -- # '[' -z 1223185 ']' 00:04:45.982 21:05:40 -- common/autotest_common.sh@940 -- # kill -0 1223185 00:04:45.982 21:05:40 -- common/autotest_common.sh@941 -- # uname 00:04:45.983 21:05:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:45.983 21:05:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1223185 00:04:45.983 21:05:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:45.983 21:05:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:45.983 21:05:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1223185' 00:04:45.983 killing process with pid 1223185 00:04:45.983 21:05:40 -- common/autotest_common.sh@955 -- # kill 1223185 00:04:45.983 21:05:40 -- common/autotest_common.sh@960 -- # wait 1223185 00:04:46.918 00:04:46.918 real 0m1.915s 00:04:46.918 user 0m1.892s 00:04:46.918 sys 0m0.498s 00:04:46.918 21:05:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:46.918 21:05:40 -- common/autotest_common.sh@10 -- # set +x 00:04:46.918 ************************************ 00:04:46.918 END TEST dpdk_mem_utility 00:04:46.918 ************************************ 00:04:46.918 21:05:40 -- spdk/autotest.sh@177 -- # run_test event 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/event.sh 00:04:46.918 21:05:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:46.918 21:05:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:46.918 21:05:40 -- common/autotest_common.sh@10 -- # set +x 00:04:46.918 ************************************ 00:04:46.918 START TEST event 00:04:46.918 ************************************ 00:04:46.918 21:05:41 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/event.sh 00:04:46.918 * Looking for test storage... 00:04:46.918 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event 00:04:46.918 21:05:41 -- event/event.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:46.918 21:05:41 -- bdev/nbd_common.sh@6 -- # set -e 00:04:46.918 21:05:41 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:46.918 21:05:41 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:04:46.918 21:05:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:46.918 21:05:41 -- common/autotest_common.sh@10 -- # set +x 00:04:46.918 ************************************ 00:04:46.918 START TEST event_perf 00:04:46.918 ************************************ 00:04:46.918 21:05:41 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:47.177 Running I/O for 1 seconds...[2024-04-23 21:05:41.196472] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 00:04:47.177 [2024-04-23 21:05:41.196535] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1223727 ] 00:04:47.177 EAL: No free 2048 kB hugepages reported on node 1 00:04:47.177 [2024-04-23 21:05:41.287348] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:47.177 [2024-04-23 21:05:41.381840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:47.177 [2024-04-23 21:05:41.381950] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:47.177 [2024-04-23 21:05:41.381969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.177 [2024-04-23 21:05:41.381969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:48.553 Running I/O for 1 seconds... 00:04:48.553 lcore 0: 167334 00:04:48.553 lcore 1: 167335 00:04:48.553 lcore 2: 167337 00:04:48.553 lcore 3: 167337 00:04:48.553 done. 
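The 'lcore N:' counters printed above are event_perf's result: each of the four reactors counts the events it processed during the one-second run, so values this close together (~167k each) suggest the work was spread evenly across the mask. The harness's invocation can be repeated verbatim:

    # Same invocation the harness used: 4-core mask (0xF), 1-second run.
    ./test/event/event_perf/event_perf -m 0xF -t 1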
00:04:48.553 00:04:48.553 real 0m1.363s 00:04:48.553 user 0m4.242s 00:04:48.553 sys 0m0.108s 00:04:48.553 21:05:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:48.553 21:05:42 -- common/autotest_common.sh@10 -- # set +x 00:04:48.553 ************************************ 00:04:48.553 END TEST event_perf 00:04:48.553 ************************************ 00:04:48.553 21:05:42 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:48.553 21:05:42 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:04:48.553 21:05:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:48.553 21:05:42 -- common/autotest_common.sh@10 -- # set +x 00:04:48.553 ************************************ 00:04:48.553 START TEST event_reactor 00:04:48.553 ************************************ 00:04:48.553 21:05:42 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:48.553 [2024-04-23 21:05:42.681292] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 00:04:48.553 [2024-04-23 21:05:42.681394] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1224015 ] 00:04:48.553 EAL: No free 2048 kB hugepages reported on node 1 00:04:48.553 [2024-04-23 21:05:42.797789] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.811 [2024-04-23 21:05:42.905934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.186 test_start 00:04:50.186 oneshot 00:04:50.186 tick 100 00:04:50.186 tick 100 00:04:50.186 tick 250 00:04:50.186 tick 100 00:04:50.186 tick 100 00:04:50.186 tick 100 00:04:50.186 tick 250 00:04:50.186 tick 500 00:04:50.186 tick 100 00:04:50.186 tick 100 00:04:50.186 tick 250 00:04:50.186 tick 100 00:04:50.186 tick 100 00:04:50.186 test_end 00:04:50.186 00:04:50.186 real 0m1.405s 00:04:50.186 user 0m1.269s 00:04:50.186 sys 0m0.130s 00:04:50.186 21:05:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:50.186 21:05:44 -- common/autotest_common.sh@10 -- # set +x 00:04:50.186 ************************************ 00:04:50.186 END TEST event_reactor 00:04:50.186 ************************************ 00:04:50.186 21:05:44 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:50.186 21:05:44 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:04:50.186 21:05:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:50.186 21:05:44 -- common/autotest_common.sh@10 -- # set +x 00:04:50.186 ************************************ 00:04:50.186 START TEST event_reactor_perf 00:04:50.186 ************************************ 00:04:50.186 21:05:44 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:50.186 [2024-04-23 21:05:44.204278] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 
00:04:50.186 [2024-04-23 21:05:44.204383] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1224337 ] 00:04:50.186 EAL: No free 2048 kB hugepages reported on node 1 00:04:50.186 [2024-04-23 21:05:44.326499] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.186 [2024-04-23 21:05:44.424411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.564 test_start 00:04:51.565 test_end 00:04:51.565 Performance: 412689 events per second 00:04:51.565 00:04:51.565 real 0m1.407s 00:04:51.565 user 0m1.264s 00:04:51.565 sys 0m0.137s 00:04:51.565 21:05:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:51.565 21:05:45 -- common/autotest_common.sh@10 -- # set +x 00:04:51.565 ************************************ 00:04:51.565 END TEST event_reactor_perf 00:04:51.565 ************************************ 00:04:51.565 21:05:45 -- event/event.sh@49 -- # uname -s 00:04:51.565 21:05:45 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:51.565 21:05:45 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:51.565 21:05:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:51.565 21:05:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:51.565 21:05:45 -- common/autotest_common.sh@10 -- # set +x 00:04:51.565 ************************************ 00:04:51.565 START TEST event_scheduler 00:04:51.565 ************************************ 00:04:51.565 21:05:45 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:51.565 * Looking for test storage... 00:04:51.565 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/scheduler 00:04:51.565 21:05:45 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:51.565 21:05:45 -- scheduler/scheduler.sh@35 -- # scheduler_pid=1224757 00:04:51.565 21:05:45 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:51.565 21:05:45 -- scheduler/scheduler.sh@37 -- # waitforlisten 1224757 00:04:51.565 21:05:45 -- common/autotest_common.sh@817 -- # '[' -z 1224757 ']' 00:04:51.565 21:05:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:51.565 21:05:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:51.565 21:05:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:51.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:51.565 21:05:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:51.565 21:05:45 -- common/autotest_common.sh@10 -- # set +x 00:04:51.565 21:05:45 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:51.824 [2024-04-23 21:05:45.861364] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 
00:04:51.824 [2024-04-23 21:05:45.861476] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1224757 ] 00:04:51.824 EAL: No free 2048 kB hugepages reported on node 1 00:04:51.824 [2024-04-23 21:05:45.954218] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:51.824 [2024-04-23 21:05:46.053130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.824 [2024-04-23 21:05:46.053242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:51.824 [2024-04-23 21:05:46.053349] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:51.824 [2024-04-23 21:05:46.053361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:52.391 21:05:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:52.391 21:05:46 -- common/autotest_common.sh@850 -- # return 0 00:04:52.391 21:05:46 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:52.391 21:05:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:52.391 21:05:46 -- common/autotest_common.sh@10 -- # set +x 00:04:52.391 POWER: Env isn't set yet! 00:04:52.391 POWER: Attempting to initialise ACPI cpufreq power management... 00:04:52.391 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:52.391 POWER: Cannot set governor of lcore 0 to userspace 00:04:52.391 POWER: Attempting to initialise PSTAT power management... 00:04:52.651 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:04:52.651 POWER: Initialized successfully for lcore 0 power management 00:04:52.651 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:04:52.651 POWER: Initialized successfully for lcore 1 power management 00:04:52.651 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:04:52.651 POWER: Initialized successfully for lcore 2 power management 00:04:52.651 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:04:52.651 POWER: Initialized successfully for lcore 3 power management 00:04:52.651 21:05:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:52.651 21:05:46 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:52.651 21:05:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:52.651 21:05:46 -- common/autotest_common.sh@10 -- # set +x 00:04:52.651 [2024-04-23 21:05:46.843526] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
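With the test application started, scheduler_create_thread drives everything through rpc_cmd with the test's scheduler_plugin, which extends rpc.py with thread-management methods: scheduler_thread_create (name, CPU mask, active percentage), scheduler_thread_set_active, and scheduler_thread_delete. The sequence below, which creates the pinned busy and idle threads numbered 2-10 and then retunes and deletes two more, reduces to calls of this shape (sketch; names, masks, and ids are the ones in the trace, and rpc.py must be able to import the plugin module, which the harness arranges via its environment):

    # Shape of the plugin-backed RPCs the subtest issues (sketch).
    rpc_sched() { ./scripts/rpc.py --plugin scheduler_plugin "$@"; }
    rpc_sched scheduler_thread_create -n active_pinned -m 0x1 -a 100   # busy thread pinned to core 0
    rpc_sched scheduler_thread_create -n idle_pinned -m 0x1 -a 0       # idle thread pinned to core 0
    rpc_sched scheduler_thread_set_active 11 50    # set thread 11 to 50% active
    rpc_sched scheduler_thread_delete 12           # remove thread 12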
00:04:52.651 21:05:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:52.651 21:05:46 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:52.651 21:05:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:52.651 21:05:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:52.651 21:05:46 -- common/autotest_common.sh@10 -- # set +x 00:04:52.911 ************************************ 00:04:52.911 START TEST scheduler_create_thread 00:04:52.911 ************************************ 00:04:52.911 21:05:46 -- common/autotest_common.sh@1111 -- # scheduler_create_thread 00:04:52.911 21:05:46 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:52.911 21:05:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:52.911 21:05:46 -- common/autotest_common.sh@10 -- # set +x 00:04:52.911 2 00:04:52.911 21:05:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:52.911 21:05:46 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:52.911 21:05:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:52.911 21:05:46 -- common/autotest_common.sh@10 -- # set +x 00:04:52.911 3 00:04:52.911 21:05:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:52.911 21:05:46 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:52.911 21:05:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:52.911 21:05:46 -- common/autotest_common.sh@10 -- # set +x 00:04:52.911 4 00:04:52.911 21:05:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:52.911 21:05:46 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:52.911 21:05:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:52.911 21:05:46 -- common/autotest_common.sh@10 -- # set +x 00:04:52.911 5 00:04:52.911 21:05:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:52.911 21:05:46 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:52.911 21:05:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:52.911 21:05:46 -- common/autotest_common.sh@10 -- # set +x 00:04:52.911 6 00:04:52.911 21:05:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:52.911 21:05:46 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:52.911 21:05:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:52.911 21:05:46 -- common/autotest_common.sh@10 -- # set +x 00:04:52.911 7 00:04:52.911 21:05:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:52.911 21:05:46 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:52.911 21:05:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:52.911 21:05:46 -- common/autotest_common.sh@10 -- # set +x 00:04:52.911 8 00:04:52.911 21:05:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:52.911 21:05:47 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:52.911 21:05:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:52.911 21:05:47 -- common/autotest_common.sh@10 -- # set +x 00:04:52.911 9 00:04:52.911 
21:05:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:52.912 21:05:47 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:52.912 21:05:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:52.912 21:05:47 -- common/autotest_common.sh@10 -- # set +x 00:04:52.912 10 00:04:52.912 21:05:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:52.912 21:05:47 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:52.912 21:05:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:52.912 21:05:47 -- common/autotest_common.sh@10 -- # set +x 00:04:52.912 21:05:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:52.912 21:05:47 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:52.912 21:05:47 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:52.912 21:05:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:52.912 21:05:47 -- common/autotest_common.sh@10 -- # set +x 00:04:52.912 21:05:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:52.912 21:05:47 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:52.912 21:05:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:52.912 21:05:47 -- common/autotest_common.sh@10 -- # set +x 00:04:52.912 21:05:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:52.912 21:05:47 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:52.912 21:05:47 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:52.912 21:05:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:52.912 21:05:47 -- common/autotest_common.sh@10 -- # set +x 00:04:53.482 21:05:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:53.482 00:04:53.482 real 0m0.592s 00:04:53.482 user 0m0.011s 00:04:53.482 sys 0m0.004s 00:04:53.482 21:05:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:53.482 21:05:47 -- common/autotest_common.sh@10 -- # set +x 00:04:53.482 ************************************ 00:04:53.482 END TEST scheduler_create_thread 00:04:53.482 ************************************ 00:04:53.482 21:05:47 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:53.482 21:05:47 -- scheduler/scheduler.sh@46 -- # killprocess 1224757 00:04:53.482 21:05:47 -- common/autotest_common.sh@936 -- # '[' -z 1224757 ']' 00:04:53.483 21:05:47 -- common/autotest_common.sh@940 -- # kill -0 1224757 00:04:53.483 21:05:47 -- common/autotest_common.sh@941 -- # uname 00:04:53.483 21:05:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:53.483 21:05:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1224757 00:04:53.483 21:05:47 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:04:53.483 21:05:47 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:04:53.483 21:05:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1224757' 00:04:53.483 killing process with pid 1224757 00:04:53.483 21:05:47 -- common/autotest_common.sh@955 -- # kill 1224757 00:04:53.483 21:05:47 -- common/autotest_common.sh@960 -- # wait 1224757 00:04:53.744 [2024-04-23 21:05:48.015825] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
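The POWER lines bracketing this test show DPDK's power library managing cpufreq state for the run: the generic ACPI write probe fails, the pstate path takes over, each lcore's governor is pinned to 'performance', and on shutdown every governor is set back to the original 'powersave'. The state it toggles is ordinary sysfs, e.g. (illustrative; core number and available governors depend on the host):

    # Inspect / set one core's cpufreq governor by hand (root required).
    cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
    echo performance | sudo tee /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor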
00:04:54.392 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:04:54.392 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:04:54.392 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:04:54.392 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:04:54.392 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:04:54.392 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:04:54.392 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:04:54.392 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:04:54.392 00:04:54.392 real 0m2.766s 00:04:54.392 user 0m5.118s 00:04:54.392 sys 0m0.461s 00:04:54.392 21:05:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:54.392 21:05:48 -- common/autotest_common.sh@10 -- # set +x 00:04:54.392 ************************************ 00:04:54.392 END TEST event_scheduler 00:04:54.392 ************************************ 00:04:54.392 21:05:48 -- event/event.sh@51 -- # modprobe -n nbd 00:04:54.392 21:05:48 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:54.392 21:05:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:54.392 21:05:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:54.392 21:05:48 -- common/autotest_common.sh@10 -- # set +x 00:04:54.392 ************************************ 00:04:54.392 START TEST app_repeat 00:04:54.392 ************************************ 00:04:54.392 21:05:48 -- common/autotest_common.sh@1111 -- # app_repeat_test 00:04:54.392 21:05:48 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:54.392 21:05:48 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.392 21:05:48 -- event/event.sh@13 -- # local nbd_list 00:04:54.392 21:05:48 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:54.392 21:05:48 -- event/event.sh@14 -- # local bdev_list 00:04:54.392 21:05:48 -- event/event.sh@15 -- # local repeat_times=4 00:04:54.392 21:05:48 -- event/event.sh@17 -- # modprobe nbd 00:04:54.392 21:05:48 -- event/event.sh@19 -- # repeat_pid=1225246 00:04:54.392 21:05:48 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:54.392 21:05:48 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1225246' 00:04:54.392 Process app_repeat pid: 1225246 00:04:54.392 21:05:48 -- event/event.sh@23 -- # for i in {0..2} 00:04:54.392 21:05:48 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:54.392 spdk_app_start Round 0 00:04:54.392 21:05:48 -- event/event.sh@25 -- # waitforlisten 1225246 /var/tmp/spdk-nbd.sock 00:04:54.392 21:05:48 -- common/autotest_common.sh@817 -- # '[' -z 1225246 ']' 00:04:54.392 21:05:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:54.392 21:05:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:54.392 21:05:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:54.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
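app_repeat is launched with -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4, and once it is listening the harness creates two Malloc bdevs over that socket and exports them as kernel nbd devices, verifying each with a single-block O_DIRECT dd (the '1+0 records' lines further down). A hand-driven version of one round trip (sketch; assumes the app is already up on the socket and the nbd kernel module is loaded):

    # Mirror of the nbd round trip the test performs over the app's RPC socket.
    rpc() { ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }
    rpc bdev_malloc_create 64 4096          # 64 MB bdev, 4096 B blocks -> "Malloc0"
    rpc nbd_start_disk Malloc0 /dev/nbd0    # expose it at /dev/nbd0
    dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct    # 1-block read check
    rpc nbd_stop_disk /dev/nbd0             # tear the export down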
00:04:54.392 21:05:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:54.392 21:05:48 -- event/event.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:54.392 21:05:48 -- common/autotest_common.sh@10 -- # set +x 00:04:54.392 [2024-04-23 21:05:48.653001] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 00:04:54.392 [2024-04-23 21:05:48.653105] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1225246 ] 00:04:54.651 EAL: No free 2048 kB hugepages reported on node 1 00:04:54.651 [2024-04-23 21:05:48.771774] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:54.651 [2024-04-23 21:05:48.870623] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:54.651 [2024-04-23 21:05:48.870625] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.219 21:05:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:55.219 21:05:49 -- common/autotest_common.sh@850 -- # return 0 00:04:55.219 21:05:49 -- event/event.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:55.478 Malloc0 00:04:55.478 21:05:49 -- event/event.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:55.478 Malloc1 00:04:55.479 21:05:49 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:55.479 21:05:49 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:55.479 21:05:49 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:55.479 21:05:49 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:55.479 21:05:49 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:55.479 21:05:49 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:55.479 21:05:49 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:55.479 21:05:49 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:55.479 21:05:49 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:55.479 21:05:49 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:55.479 21:05:49 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:55.479 21:05:49 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:55.479 21:05:49 -- bdev/nbd_common.sh@12 -- # local i 00:04:55.479 21:05:49 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:55.479 21:05:49 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:55.479 21:05:49 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:55.738 /dev/nbd0 00:04:55.738 21:05:49 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:55.738 21:05:49 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:55.738 21:05:49 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:04:55.738 21:05:49 -- common/autotest_common.sh@855 -- # local i 00:04:55.738 21:05:49 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:04:55.738 21:05:49 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:04:55.738 21:05:49 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 
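Each app_repeat round builds its device pair with the RPC sequence traced above: two malloc bdevs (64 MB total size, 4096-byte blocks) exposed as nbd block devices over the app's RPC socket. Condensed, with the rpc.py path shortened:

    # RPC sequence each round runs against the app_repeat instance
    rpc="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc bdev_malloc_create 64 4096          # -> Malloc0
    $rpc bdev_malloc_create 64 4096          # -> Malloc1
    $rpc nbd_start_disk Malloc0 /dev/nbd0    # expose the bdevs as /dev/nbd*
    $rpc nbd_start_disk Malloc1 /dev/nbd1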
00:04:55.738 21:05:49 -- common/autotest_common.sh@859 -- # break 00:04:55.738 21:05:49 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:04:55.738 21:05:49 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:04:55.738 21:05:49 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:55.738 1+0 records in 00:04:55.738 1+0 records out 00:04:55.738 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000162281 s, 25.2 MB/s 00:04:55.738 21:05:49 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:04:55.738 21:05:49 -- common/autotest_common.sh@872 -- # size=4096 00:04:55.738 21:05:49 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:04:55.738 21:05:49 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:04:55.738 21:05:49 -- common/autotest_common.sh@875 -- # return 0 00:04:55.738 21:05:49 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:55.738 21:05:49 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:55.738 21:05:49 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:55.996 /dev/nbd1 00:04:55.996 21:05:50 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:55.996 21:05:50 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:55.996 21:05:50 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:04:55.996 21:05:50 -- common/autotest_common.sh@855 -- # local i 00:04:55.996 21:05:50 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:04:55.996 21:05:50 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:04:55.996 21:05:50 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:04:55.996 21:05:50 -- common/autotest_common.sh@859 -- # break 00:04:55.996 21:05:50 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:04:55.996 21:05:50 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:04:55.996 21:05:50 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:55.996 1+0 records in 00:04:55.996 1+0 records out 00:04:55.996 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000291876 s, 14.0 MB/s 00:04:55.996 21:05:50 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:04:55.996 21:05:50 -- common/autotest_common.sh@872 -- # size=4096 00:04:55.996 21:05:50 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:04:55.996 21:05:50 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:04:55.996 21:05:50 -- common/autotest_common.sh@875 -- # return 0 00:04:55.996 21:05:50 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:55.996 21:05:50 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:55.996 21:05:50 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:55.996 21:05:50 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:55.996 21:05:50 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:55.996 21:05:50 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:55.996 { 00:04:55.996 "nbd_device": "/dev/nbd0", 00:04:55.996 "bdev_name": "Malloc0" 00:04:55.996 }, 00:04:55.996 { 00:04:55.996 "nbd_device": "/dev/nbd1", 00:04:55.996 
"bdev_name": "Malloc1" 00:04:55.996 } 00:04:55.996 ]' 00:04:55.996 21:05:50 -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:55.996 { 00:04:55.996 "nbd_device": "/dev/nbd0", 00:04:55.996 "bdev_name": "Malloc0" 00:04:55.996 }, 00:04:55.996 { 00:04:55.996 "nbd_device": "/dev/nbd1", 00:04:55.996 "bdev_name": "Malloc1" 00:04:55.996 } 00:04:55.996 ]' 00:04:55.996 21:05:50 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:56.256 21:05:50 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:56.256 /dev/nbd1' 00:04:56.256 21:05:50 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:56.256 /dev/nbd1' 00:04:56.256 21:05:50 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:56.256 21:05:50 -- bdev/nbd_common.sh@65 -- # count=2 00:04:56.256 21:05:50 -- bdev/nbd_common.sh@66 -- # echo 2 00:04:56.256 21:05:50 -- bdev/nbd_common.sh@95 -- # count=2 00:04:56.256 21:05:50 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:56.256 21:05:50 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:56.256 21:05:50 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:56.256 21:05:50 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:56.256 21:05:50 -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:56.256 21:05:50 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest 00:04:56.256 21:05:50 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:56.256 21:05:50 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:56.256 256+0 records in 00:04:56.256 256+0 records out 00:04:56.256 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00454807 s, 231 MB/s 00:04:56.256 21:05:50 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:56.256 21:05:50 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:56.256 256+0 records in 00:04:56.256 256+0 records out 00:04:56.256 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0149626 s, 70.1 MB/s 00:04:56.256 21:05:50 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:56.256 21:05:50 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:56.256 256+0 records in 00:04:56.256 256+0 records out 00:04:56.256 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0176274 s, 59.5 MB/s 00:04:56.256 21:05:50 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:56.256 21:05:50 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:56.256 21:05:50 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:56.256 21:05:50 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:56.256 21:05:50 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest 00:04:56.256 21:05:50 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:56.256 21:05:50 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:56.256 21:05:50 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:56.256 21:05:50 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:56.256 21:05:50 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:56.256 21:05:50 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:56.256 21:05:50 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest 00:04:56.256 21:05:50 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:56.256 21:05:50 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:56.256 21:05:50 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:56.256 21:05:50 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:56.256 21:05:50 -- bdev/nbd_common.sh@51 -- # local i 00:04:56.256 21:05:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:56.256 21:05:50 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:56.256 21:05:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:56.256 21:05:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:56.256 21:05:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:56.515 21:05:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:56.515 21:05:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:56.515 21:05:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:56.515 21:05:50 -- bdev/nbd_common.sh@41 -- # break 00:04:56.515 21:05:50 -- bdev/nbd_common.sh@45 -- # return 0 00:04:56.515 21:05:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:56.515 21:05:50 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:56.515 21:05:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:56.515 21:05:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:56.515 21:05:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:56.515 21:05:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:56.515 21:05:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:56.515 21:05:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:56.515 21:05:50 -- bdev/nbd_common.sh@41 -- # break 00:04:56.515 21:05:50 -- bdev/nbd_common.sh@45 -- # return 0 00:04:56.515 21:05:50 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:56.515 21:05:50 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:56.515 21:05:50 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:56.775 21:05:50 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:56.775 21:05:50 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:56.775 21:05:50 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:56.775 21:05:50 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:56.775 21:05:50 -- bdev/nbd_common.sh@65 -- # echo '' 00:04:56.775 21:05:50 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:56.775 21:05:50 -- bdev/nbd_common.sh@65 -- # true 00:04:56.775 21:05:50 -- bdev/nbd_common.sh@65 -- # count=0 00:04:56.775 21:05:50 -- bdev/nbd_common.sh@66 -- # echo 0 00:04:56.775 21:05:50 -- bdev/nbd_common.sh@104 -- # count=0 00:04:56.775 21:05:50 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:56.775 21:05:50 -- bdev/nbd_common.sh@109 -- # return 0 00:04:56.775 21:05:50 -- event/event.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:57.034 21:05:51 -- event/event.sh@35 -- # sleep 3 00:04:57.293 [2024-04-23 
21:05:51.560996] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:57.552 [2024-04-23 21:05:51.651038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:57.552 [2024-04-23 21:05:51.651043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.552 [2024-04-23 21:05:51.729333] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:57.552 [2024-04-23 21:05:51.729374] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:00.086 21:05:54 -- event/event.sh@23 -- # for i in {0..2} 00:05:00.086 21:05:54 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:00.086 spdk_app_start Round 1 00:05:00.086 21:05:54 -- event/event.sh@25 -- # waitforlisten 1225246 /var/tmp/spdk-nbd.sock 00:05:00.086 21:05:54 -- common/autotest_common.sh@817 -- # '[' -z 1225246 ']' 00:05:00.086 21:05:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:00.086 21:05:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:00.086 21:05:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:00.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:00.086 21:05:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:00.086 21:05:54 -- common/autotest_common.sh@10 -- # set +x 00:05:00.086 21:05:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:00.086 21:05:54 -- common/autotest_common.sh@850 -- # return 0 00:05:00.086 21:05:54 -- event/event.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:00.344 Malloc0 00:05:00.344 21:05:54 -- event/event.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:00.344 Malloc1 00:05:00.344 21:05:54 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:00.344 21:05:54 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.344 21:05:54 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:00.344 21:05:54 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:00.344 21:05:54 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.344 21:05:54 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:00.344 21:05:54 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:00.344 21:05:54 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.344 21:05:54 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:00.344 21:05:54 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:00.344 21:05:54 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.344 21:05:54 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:00.344 21:05:54 -- bdev/nbd_common.sh@12 -- # local i 00:05:00.344 21:05:54 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:00.344 21:05:54 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:00.344 21:05:54 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:00.603 /dev/nbd0 00:05:00.603 21:05:54 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:00.603 
21:05:54 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:00.603 21:05:54 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:05:00.603 21:05:54 -- common/autotest_common.sh@855 -- # local i 00:05:00.603 21:05:54 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:00.603 21:05:54 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:00.603 21:05:54 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:05:00.603 21:05:54 -- common/autotest_common.sh@859 -- # break 00:05:00.603 21:05:54 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:00.603 21:05:54 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:00.603 21:05:54 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:00.603 1+0 records in 00:05:00.603 1+0 records out 00:05:00.603 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000248177 s, 16.5 MB/s 00:05:00.603 21:05:54 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:05:00.603 21:05:54 -- common/autotest_common.sh@872 -- # size=4096 00:05:00.603 21:05:54 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:05:00.603 21:05:54 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:00.603 21:05:54 -- common/autotest_common.sh@875 -- # return 0 00:05:00.603 21:05:54 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:00.603 21:05:54 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:00.603 21:05:54 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:00.861 /dev/nbd1 00:05:00.861 21:05:54 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:00.861 21:05:54 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:00.861 21:05:54 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:05:00.861 21:05:54 -- common/autotest_common.sh@855 -- # local i 00:05:00.861 21:05:54 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:00.861 21:05:54 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:00.861 21:05:54 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:05:00.861 21:05:54 -- common/autotest_common.sh@859 -- # break 00:05:00.861 21:05:54 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:00.861 21:05:54 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:00.861 21:05:54 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:00.861 1+0 records in 00:05:00.861 1+0 records out 00:05:00.861 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000170929 s, 24.0 MB/s 00:05:00.861 21:05:54 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:05:00.861 21:05:54 -- common/autotest_common.sh@872 -- # size=4096 00:05:00.861 21:05:54 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:05:00.861 21:05:54 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:00.861 21:05:54 -- common/autotest_common.sh@875 -- # return 0 00:05:00.861 21:05:54 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:00.861 21:05:54 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:00.861 21:05:54 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:00.861 21:05:54 -- bdev/nbd_common.sh@61 -- # 
local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.861 21:05:54 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:00.861 21:05:55 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:00.861 { 00:05:00.861 "nbd_device": "/dev/nbd0", 00:05:00.861 "bdev_name": "Malloc0" 00:05:00.861 }, 00:05:00.861 { 00:05:00.861 "nbd_device": "/dev/nbd1", 00:05:00.861 "bdev_name": "Malloc1" 00:05:00.861 } 00:05:00.861 ]' 00:05:00.861 21:05:55 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:00.861 21:05:55 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:00.861 { 00:05:00.861 "nbd_device": "/dev/nbd0", 00:05:00.861 "bdev_name": "Malloc0" 00:05:00.861 }, 00:05:00.861 { 00:05:00.861 "nbd_device": "/dev/nbd1", 00:05:00.861 "bdev_name": "Malloc1" 00:05:00.861 } 00:05:00.861 ]' 00:05:01.119 21:05:55 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:01.119 /dev/nbd1' 00:05:01.119 21:05:55 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:01.119 /dev/nbd1' 00:05:01.119 21:05:55 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:01.119 21:05:55 -- bdev/nbd_common.sh@65 -- # count=2 00:05:01.119 21:05:55 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:01.119 21:05:55 -- bdev/nbd_common.sh@95 -- # count=2 00:05:01.119 21:05:55 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:01.119 21:05:55 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:01.119 21:05:55 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.119 21:05:55 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:01.119 21:05:55 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:01.120 21:05:55 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest 00:05:01.120 21:05:55 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:01.120 21:05:55 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:01.120 256+0 records in 00:05:01.120 256+0 records out 00:05:01.120 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00540238 s, 194 MB/s 00:05:01.120 21:05:55 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:01.120 21:05:55 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:01.120 256+0 records in 00:05:01.120 256+0 records out 00:05:01.120 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0149117 s, 70.3 MB/s 00:05:01.120 21:05:55 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:01.120 21:05:55 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:01.120 256+0 records in 00:05:01.120 256+0 records out 00:05:01.120 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0163729 s, 64.0 MB/s 00:05:01.120 21:05:55 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:01.120 21:05:55 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.120 21:05:55 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:01.120 21:05:55 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:01.120 21:05:55 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest 00:05:01.120 21:05:55 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:01.120 21:05:55 -- 
bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:01.120 21:05:55 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:01.120 21:05:55 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:01.120 21:05:55 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:01.120 21:05:55 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:01.120 21:05:55 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest 00:05:01.120 21:05:55 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:01.120 21:05:55 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.120 21:05:55 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.120 21:05:55 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:01.120 21:05:55 -- bdev/nbd_common.sh@51 -- # local i 00:05:01.120 21:05:55 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:01.120 21:05:55 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:01.120 21:05:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:01.120 21:05:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:01.120 21:05:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:01.120 21:05:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:01.120 21:05:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:01.120 21:05:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:01.120 21:05:55 -- bdev/nbd_common.sh@41 -- # break 00:05:01.120 21:05:55 -- bdev/nbd_common.sh@45 -- # return 0 00:05:01.120 21:05:55 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:01.120 21:05:55 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:01.378 21:05:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:01.378 21:05:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:01.378 21:05:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:01.378 21:05:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:01.379 21:05:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:01.379 21:05:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:01.379 21:05:55 -- bdev/nbd_common.sh@41 -- # break 00:05:01.379 21:05:55 -- bdev/nbd_common.sh@45 -- # return 0 00:05:01.379 21:05:55 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:01.379 21:05:55 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.379 21:05:55 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:01.637 21:05:55 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:01.637 21:05:55 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:01.637 21:05:55 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:01.637 21:05:55 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:01.637 21:05:55 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:01.637 21:05:55 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:01.637 21:05:55 -- bdev/nbd_common.sh@65 -- # true 00:05:01.637 21:05:55 -- bdev/nbd_common.sh@65 -- # count=0 00:05:01.637 21:05:55 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:01.637 
21:05:55 -- bdev/nbd_common.sh@104 -- # count=0 00:05:01.637 21:05:55 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:01.637 21:05:55 -- bdev/nbd_common.sh@109 -- # return 0 00:05:01.637 21:05:55 -- event/event.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:01.896 21:05:55 -- event/event.sh@35 -- # sleep 3 00:05:02.154 [2024-04-23 21:05:56.405620] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:02.412 [2024-04-23 21:05:56.493864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.412 [2024-04-23 21:05:56.493869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:02.412 [2024-04-23 21:05:56.568955] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:02.412 [2024-04-23 21:05:56.568992] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:04.940 21:05:58 -- event/event.sh@23 -- # for i in {0..2} 00:05:04.941 21:05:58 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:04.941 spdk_app_start Round 2 00:05:04.941 21:05:58 -- event/event.sh@25 -- # waitforlisten 1225246 /var/tmp/spdk-nbd.sock 00:05:04.941 21:05:58 -- common/autotest_common.sh@817 -- # '[' -z 1225246 ']' 00:05:04.941 21:05:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:04.941 21:05:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:04.941 21:05:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:04.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
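The count check that closes each round (nbd_common.sh@104, traced just above) lists the attached nbd devices over the RPC socket and expects zero once both disks are stopped. A sketch of the jq/grep pipeline visible in the trace:

    nbd_get_count() {
        local rpc_server=$1
        local nbd_disks_json nbd_disks_name count
        nbd_disks_json=$(scripts/rpc.py -s "$rpc_server" nbd_get_disks)
        nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
        # grep -c exits non-zero when nothing matches, hence the guard
        # (it shows up in the trace as the bare 'true' after grep)
        count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
        echo "$count"
    }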
00:05:04.941 21:05:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:04.941 21:05:58 -- common/autotest_common.sh@10 -- # set +x 00:05:04.941 21:05:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:04.941 21:05:59 -- common/autotest_common.sh@850 -- # return 0 00:05:04.941 21:05:59 -- event/event.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:05.199 Malloc0 00:05:05.199 21:05:59 -- event/event.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:05.199 Malloc1 00:05:05.199 21:05:59 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:05.199 21:05:59 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.199 21:05:59 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:05.199 21:05:59 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:05.199 21:05:59 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.199 21:05:59 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:05.199 21:05:59 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:05.199 21:05:59 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.199 21:05:59 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:05.199 21:05:59 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:05.199 21:05:59 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.199 21:05:59 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:05.199 21:05:59 -- bdev/nbd_common.sh@12 -- # local i 00:05:05.199 21:05:59 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:05.199 21:05:59 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:05.199 21:05:59 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:05.458 /dev/nbd0 00:05:05.458 21:05:59 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:05.458 21:05:59 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:05.458 21:05:59 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:05:05.458 21:05:59 -- common/autotest_common.sh@855 -- # local i 00:05:05.458 21:05:59 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:05.458 21:05:59 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:05.458 21:05:59 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:05:05.458 21:05:59 -- common/autotest_common.sh@859 -- # break 00:05:05.458 21:05:59 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:05.458 21:05:59 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:05.458 21:05:59 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:05.458 1+0 records in 00:05:05.458 1+0 records out 00:05:05.458 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000203305 s, 20.1 MB/s 00:05:05.458 21:05:59 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:05:05.458 21:05:59 -- common/autotest_common.sh@872 -- # size=4096 00:05:05.458 21:05:59 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:05:05.458 21:05:59 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 
00:05:05.458 21:05:59 -- common/autotest_common.sh@875 -- # return 0 00:05:05.458 21:05:59 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:05.458 21:05:59 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:05.458 21:05:59 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:05.716 /dev/nbd1 00:05:05.716 21:05:59 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:05.716 21:05:59 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:05.716 21:05:59 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:05:05.716 21:05:59 -- common/autotest_common.sh@855 -- # local i 00:05:05.716 21:05:59 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:05.716 21:05:59 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:05.716 21:05:59 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:05:05.716 21:05:59 -- common/autotest_common.sh@859 -- # break 00:05:05.716 21:05:59 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:05.716 21:05:59 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:05.716 21:05:59 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:05.716 1+0 records in 00:05:05.716 1+0 records out 00:05:05.716 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000316107 s, 13.0 MB/s 00:05:05.716 21:05:59 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:05:05.716 21:05:59 -- common/autotest_common.sh@872 -- # size=4096 00:05:05.716 21:05:59 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:05:05.716 21:05:59 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:05.716 21:05:59 -- common/autotest_common.sh@875 -- # return 0 00:05:05.716 21:05:59 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:05.716 21:05:59 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:05.716 21:05:59 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:05.716 21:05:59 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.716 21:05:59 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:05.716 21:05:59 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:05.716 { 00:05:05.716 "nbd_device": "/dev/nbd0", 00:05:05.716 "bdev_name": "Malloc0" 00:05:05.716 }, 00:05:05.716 { 00:05:05.716 "nbd_device": "/dev/nbd1", 00:05:05.716 "bdev_name": "Malloc1" 00:05:05.716 } 00:05:05.716 ]' 00:05:05.716 21:05:59 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:05.716 { 00:05:05.716 "nbd_device": "/dev/nbd0", 00:05:05.716 "bdev_name": "Malloc0" 00:05:05.716 }, 00:05:05.716 { 00:05:05.716 "nbd_device": "/dev/nbd1", 00:05:05.716 "bdev_name": "Malloc1" 00:05:05.716 } 00:05:05.716 ]' 00:05:05.716 21:05:59 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:05.975 21:06:00 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:05.975 /dev/nbd1' 00:05:05.975 21:06:00 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:05.975 /dev/nbd1' 00:05:05.975 21:06:00 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:05.975 21:06:00 -- bdev/nbd_common.sh@65 -- # count=2 00:05:05.975 21:06:00 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:05.975 21:06:00 -- bdev/nbd_common.sh@95 -- # count=2 00:05:05.975 21:06:00 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 
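The waitfornbd probe traced above (autotest_common.sh@854-875) runs in two phases: wait for the device node to register, then prove it answers a real 4 KiB direct read. A reconstruction with the probe file path shortened; the retry delay is an assumption, since only the loop bounds are visible in the trace:

    waitfornbd() {
        local nbd_name=$1 i size
        # phase 1: up to 20 attempts for the device to appear in the
        # kernel's partition list
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed delay; not visible in the trace
        done
        # phase 2: up to 20 attempts to read one 4 KiB block with O_DIRECT,
        # confirming the device actually serves I/O
        for ((i = 1; i <= 20; i++)); do
            dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
            size=$(stat -c %s /tmp/nbdtest)
            rm -f /tmp/nbdtest
            [ "$size" != 0 ] && return 0
            sleep 0.1   # assumed delay
        done
        return 1
    }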
00:05:05.975 21:06:00 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:05.975 21:06:00 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.975 21:06:00 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:05.975 21:06:00 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:05.975 21:06:00 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest 00:05:05.975 21:06:00 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:05.975 21:06:00 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:05.975 256+0 records in 00:05:05.975 256+0 records out 00:05:05.975 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00462716 s, 227 MB/s 00:05:05.975 21:06:00 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:05.975 21:06:00 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:05.975 256+0 records in 00:05:05.975 256+0 records out 00:05:05.975 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0152852 s, 68.6 MB/s 00:05:05.975 21:06:00 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:05.975 21:06:00 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:05.975 256+0 records in 00:05:05.975 256+0 records out 00:05:05.975 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0187028 s, 56.1 MB/s 00:05:05.975 21:06:00 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:05.975 21:06:00 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.975 21:06:00 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:05.975 21:06:00 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:05.975 21:06:00 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest 00:05:05.975 21:06:00 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:05.975 21:06:00 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:05.975 21:06:00 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:05.975 21:06:00 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:05.975 21:06:00 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:05.975 21:06:00 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:05.975 21:06:00 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest 00:05:05.975 21:06:00 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:05.975 21:06:00 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.975 21:06:00 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.975 21:06:00 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:05.975 21:06:00 -- bdev/nbd_common.sh@51 -- # local i 00:05:05.975 21:06:00 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:05.975 21:06:00 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:05.975 21:06:00 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:05.975 21:06:00 -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:05.975 21:06:00 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:05.975 21:06:00 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:05.975 21:06:00 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:05.975 21:06:00 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:05.975 21:06:00 -- bdev/nbd_common.sh@41 -- # break 00:05:05.975 21:06:00 -- bdev/nbd_common.sh@45 -- # return 0 00:05:05.975 21:06:00 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:05.975 21:06:00 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:06.234 21:06:00 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:06.234 21:06:00 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:06.234 21:06:00 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:06.234 21:06:00 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:06.234 21:06:00 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:06.234 21:06:00 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:06.234 21:06:00 -- bdev/nbd_common.sh@41 -- # break 00:05:06.234 21:06:00 -- bdev/nbd_common.sh@45 -- # return 0 00:05:06.234 21:06:00 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:06.234 21:06:00 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.234 21:06:00 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:06.492 21:06:00 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:06.492 21:06:00 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:06.492 21:06:00 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:06.492 21:06:00 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:06.492 21:06:00 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:06.492 21:06:00 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:06.492 21:06:00 -- bdev/nbd_common.sh@65 -- # true 00:05:06.492 21:06:00 -- bdev/nbd_common.sh@65 -- # count=0 00:05:06.492 21:06:00 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:06.492 21:06:00 -- bdev/nbd_common.sh@104 -- # count=0 00:05:06.492 21:06:00 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:06.492 21:06:00 -- bdev/nbd_common.sh@109 -- # return 0 00:05:06.492 21:06:00 -- event/event.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:06.750 21:06:00 -- event/event.sh@35 -- # sleep 3 00:05:07.316 [2024-04-23 21:06:01.312298] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:07.316 [2024-04-23 21:06:01.399531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:07.316 [2024-04-23 21:06:01.399535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.316 [2024-04-23 21:06:01.473882] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:07.316 [2024-04-23 21:06:01.473920] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
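The data path each round exercises is the write/verify pair traced above: seed 1 MiB of random data (256 x 4 KiB), push it through both nbd devices with O_DIRECT so the bdev rather than the page cache is hit, then byte-compare each device against the seed file. In outline, with the tmp path shortened:

    nbd_dd_data_verify() {
        local nbd_list=($1) operation=$2
        local tmp_file=/tmp/nbdrandtest
        if [ "$operation" = write ]; then
            # 1 MiB of random seed data, copied to every attached device
            dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
            for i in "${nbd_list[@]}"; do
                dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
            done
        elif [ "$operation" = verify ]; then
            # compare the first 1 MiB of each device byte-for-byte
            for i in "${nbd_list[@]}"; do
                cmp -b -n 1M "$tmp_file" "$i"
            done
            rm "$tmp_file"
        fi
    }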
00:05:09.844 21:06:03 -- event/event.sh@38 -- # waitforlisten 1225246 /var/tmp/spdk-nbd.sock 00:05:09.844 21:06:03 -- common/autotest_common.sh@817 -- # '[' -z 1225246 ']' 00:05:09.845 21:06:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:09.845 21:06:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:09.845 21:06:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:09.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:09.845 21:06:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:09.845 21:06:03 -- common/autotest_common.sh@10 -- # set +x 00:05:09.845 21:06:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:09.845 21:06:03 -- common/autotest_common.sh@850 -- # return 0 00:05:09.845 21:06:03 -- event/event.sh@39 -- # killprocess 1225246 00:05:09.845 21:06:03 -- common/autotest_common.sh@936 -- # '[' -z 1225246 ']' 00:05:09.845 21:06:03 -- common/autotest_common.sh@940 -- # kill -0 1225246 00:05:09.845 21:06:03 -- common/autotest_common.sh@941 -- # uname 00:05:09.845 21:06:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:09.845 21:06:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1225246 00:05:09.845 21:06:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:09.845 21:06:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:09.845 21:06:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1225246' 00:05:09.845 killing process with pid 1225246 00:05:09.845 21:06:04 -- common/autotest_common.sh@955 -- # kill 1225246 00:05:09.845 21:06:04 -- common/autotest_common.sh@960 -- # wait 1225246 00:05:10.126 spdk_app_start is called in Round 0. 00:05:10.126 Shutdown signal received, stop current app iteration 00:05:10.126 Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 reinitialization... 00:05:10.126 spdk_app_start is called in Round 1. 00:05:10.126 Shutdown signal received, stop current app iteration 00:05:10.126 Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 reinitialization... 00:05:10.126 spdk_app_start is called in Round 2. 00:05:10.126 Shutdown signal received, stop current app iteration 00:05:10.126 Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 reinitialization... 00:05:10.126 spdk_app_start is called in Round 3. 
00:05:10.126 Shutdown signal received, stop current app iteration 00:05:10.385 21:06:04 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:10.385 21:06:04 -- event/event.sh@42 -- # return 0 00:05:10.385 00:05:10.385 real 0m15.809s 00:05:10.385 user 0m33.161s 00:05:10.385 sys 0m2.139s 00:05:10.385 21:06:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:10.385 21:06:04 -- common/autotest_common.sh@10 -- # set +x 00:05:10.385 ************************************ 00:05:10.385 END TEST app_repeat 00:05:10.385 ************************************ 00:05:10.385 21:06:04 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:10.385 21:06:04 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:10.385 21:06:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:10.385 21:06:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:10.385 21:06:04 -- common/autotest_common.sh@10 -- # set +x 00:05:10.385 ************************************ 00:05:10.385 START TEST cpu_locks 00:05:10.385 ************************************ 00:05:10.385 21:06:04 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:10.385 * Looking for test storage... 00:05:10.385 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event 00:05:10.385 21:06:04 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:10.385 21:06:04 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:10.385 21:06:04 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:10.385 21:06:04 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:10.385 21:06:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:10.385 21:06:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:10.385 21:06:04 -- common/autotest_common.sh@10 -- # set +x 00:05:10.645 ************************************ 00:05:10.645 START TEST default_locks 00:05:10.645 ************************************ 00:05:10.645 21:06:04 -- common/autotest_common.sh@1111 -- # default_locks 00:05:10.645 21:06:04 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1228825 00:05:10.645 21:06:04 -- event/cpu_locks.sh@47 -- # waitforlisten 1228825 00:05:10.645 21:06:04 -- common/autotest_common.sh@817 -- # '[' -z 1228825 ']' 00:05:10.645 21:06:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.645 21:06:04 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:10.645 21:06:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:10.645 21:06:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:10.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:10.645 21:06:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:10.645 21:06:04 -- common/autotest_common.sh@10 -- # set +x 00:05:10.645 [2024-04-23 21:06:04.849726] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 
00:05:10.645 [2024-04-23 21:06:04.849864] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1228825 ] 00:05:10.902 EAL: No free 2048 kB hugepages reported on node 1 00:05:10.902 [2024-04-23 21:06:04.984443] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.902 [2024-04-23 21:06:05.079656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.469 21:06:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:11.469 21:06:05 -- common/autotest_common.sh@850 -- # return 0 00:05:11.469 21:06:05 -- event/cpu_locks.sh@49 -- # locks_exist 1228825 00:05:11.469 21:06:05 -- event/cpu_locks.sh@22 -- # lslocks -p 1228825 00:05:11.469 21:06:05 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:11.469 lslocks: write error 00:05:11.469 21:06:05 -- event/cpu_locks.sh@50 -- # killprocess 1228825 00:05:11.469 21:06:05 -- common/autotest_common.sh@936 -- # '[' -z 1228825 ']' 00:05:11.469 21:06:05 -- common/autotest_common.sh@940 -- # kill -0 1228825 00:05:11.469 21:06:05 -- common/autotest_common.sh@941 -- # uname 00:05:11.469 21:06:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:11.469 21:06:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1228825 00:05:11.728 21:06:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:11.728 21:06:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:11.728 21:06:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1228825' 00:05:11.728 killing process with pid 1228825 00:05:11.728 21:06:05 -- common/autotest_common.sh@955 -- # kill 1228825 00:05:11.728 21:06:05 -- common/autotest_common.sh@960 -- # wait 1228825 00:05:12.666 21:06:06 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1228825 00:05:12.666 21:06:06 -- common/autotest_common.sh@638 -- # local es=0 00:05:12.666 21:06:06 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 1228825 00:05:12.666 21:06:06 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:05:12.666 21:06:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:12.666 21:06:06 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:05:12.666 21:06:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:12.666 21:06:06 -- common/autotest_common.sh@641 -- # waitforlisten 1228825 00:05:12.666 21:06:06 -- common/autotest_common.sh@817 -- # '[' -z 1228825 ']' 00:05:12.666 21:06:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.666 21:06:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:12.666 21:06:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
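The lock probe traced above is a one-liner around lslocks, and the "lslocks: write error" in the log is expected noise: grep -q exits on the first match and closes the pipe while lslocks is still writing. Roughly:

    locks_exist() {
        # a target started with -m 0x1 must hold a POSIX file lock named
        # spdk_cpu_lock for its core; fail the check if it does not
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }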
00:05:12.666 21:06:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:12.666 21:06:06 -- common/autotest_common.sh@10 -- # set +x 00:05:12.666 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (1228825) - No such process 00:05:12.666 ERROR: process (pid: 1228825) is no longer running 00:05:12.666 21:06:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:12.666 21:06:06 -- common/autotest_common.sh@850 -- # return 1 00:05:12.666 21:06:06 -- common/autotest_common.sh@641 -- # es=1 00:05:12.666 21:06:06 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:12.666 21:06:06 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:12.666 21:06:06 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:12.666 21:06:06 -- event/cpu_locks.sh@54 -- # no_locks 00:05:12.666 21:06:06 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:12.666 21:06:06 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:12.666 21:06:06 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:12.666 00:05:12.666 real 0m1.883s 00:05:12.666 user 0m1.769s 00:05:12.666 sys 0m0.557s 00:05:12.666 21:06:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:12.666 21:06:06 -- common/autotest_common.sh@10 -- # set +x 00:05:12.666 ************************************ 00:05:12.666 END TEST default_locks 00:05:12.666 ************************************ 00:05:12.666 21:06:06 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:12.666 21:06:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:12.666 21:06:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:12.666 21:06:06 -- common/autotest_common.sh@10 -- # set +x 00:05:12.666 ************************************ 00:05:12.666 START TEST default_locks_via_rpc 00:05:12.666 ************************************ 00:05:12.666 21:06:06 -- common/autotest_common.sh@1111 -- # default_locks_via_rpc 00:05:12.666 21:06:06 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1229582 00:05:12.666 21:06:06 -- event/cpu_locks.sh@63 -- # waitforlisten 1229582 00:05:12.666 21:06:06 -- common/autotest_common.sh@817 -- # '[' -z 1229582 ']' 00:05:12.666 21:06:06 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:12.666 21:06:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.666 21:06:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:12.666 21:06:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.666 21:06:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:12.666 21:06:06 -- common/autotest_common.sh@10 -- # set +x 00:05:12.666 [2024-04-23 21:06:06.874337] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 
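The block above (es=0, valid_exec_arg, the deliberate "No such process" error) is the NOT helper asserting that waitforlisten fails once the target is gone. Its shape, inferred from the trace and simplified:

    NOT() {
        # run the wrapped command and invert the result: NOT succeeds only
        # when the command fails, which is what the dead-pid test expects
        local es=0
        "$@" || es=$?
        # the trace also checks (( es > 128 )) to recognize signal deaths;
        # that branch is not taken here and is omitted
        (( es != 0 ))
    }

Here NOT waitforlisten 1228825 passes precisely because the process was already killed.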
00:05:12.666 [2024-04-23 21:06:06.874472] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1229582 ] 00:05:12.926 EAL: No free 2048 kB hugepages reported on node 1 00:05:12.926 [2024-04-23 21:06:07.009302] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.926 [2024-04-23 21:06:07.103163] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.494 21:06:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:13.494 21:06:07 -- common/autotest_common.sh@850 -- # return 0 00:05:13.494 21:06:07 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:13.494 21:06:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:13.494 21:06:07 -- common/autotest_common.sh@10 -- # set +x 00:05:13.494 21:06:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:13.494 21:06:07 -- event/cpu_locks.sh@67 -- # no_locks 00:05:13.494 21:06:07 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:13.494 21:06:07 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:13.494 21:06:07 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:13.494 21:06:07 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:13.494 21:06:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:13.494 21:06:07 -- common/autotest_common.sh@10 -- # set +x 00:05:13.494 21:06:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:13.494 21:06:07 -- event/cpu_locks.sh@71 -- # locks_exist 1229582 00:05:13.494 21:06:07 -- event/cpu_locks.sh@22 -- # lslocks -p 1229582 00:05:13.495 21:06:07 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:13.495 21:06:07 -- event/cpu_locks.sh@73 -- # killprocess 1229582 00:05:13.495 21:06:07 -- common/autotest_common.sh@936 -- # '[' -z 1229582 ']' 00:05:13.495 21:06:07 -- common/autotest_common.sh@940 -- # kill -0 1229582 00:05:13.495 21:06:07 -- common/autotest_common.sh@941 -- # uname 00:05:13.495 21:06:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:13.495 21:06:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1229582 00:05:13.754 21:06:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:13.754 21:06:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:13.754 21:06:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1229582' 00:05:13.754 killing process with pid 1229582 00:05:13.754 21:06:07 -- common/autotest_common.sh@955 -- # kill 1229582 00:05:13.754 21:06:07 -- common/autotest_common.sh@960 -- # wait 1229582 00:05:14.691 00:05:14.691 real 0m1.878s 00:05:14.691 user 0m1.810s 00:05:14.691 sys 0m0.520s 00:05:14.691 21:06:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:14.691 21:06:08 -- common/autotest_common.sh@10 -- # set +x 00:05:14.691 ************************************ 00:05:14.691 END TEST default_locks_via_rpc 00:05:14.691 ************************************ 00:05:14.691 21:06:08 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:14.691 21:06:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:14.691 21:06:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:14.691 21:06:08 -- common/autotest_common.sh@10 -- # set +x 00:05:14.691 ************************************ 00:05:14.691 START TEST non_locking_app_on_locked_coremask 
00:05:14.691 ************************************ 00:05:14.691 21:06:08 -- common/autotest_common.sh@1111 -- # non_locking_app_on_locked_coremask 00:05:14.691 21:06:08 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1230095 00:05:14.691 21:06:08 -- event/cpu_locks.sh@81 -- # waitforlisten 1230095 /var/tmp/spdk.sock 00:05:14.691 21:06:08 -- common/autotest_common.sh@817 -- # '[' -z 1230095 ']' 00:05:14.691 21:06:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:14.691 21:06:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:14.691 21:06:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:14.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:14.691 21:06:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:14.691 21:06:08 -- common/autotest_common.sh@10 -- # set +x 00:05:14.691 21:06:08 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:14.691 [2024-04-23 21:06:08.866486] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 00:05:14.691 [2024-04-23 21:06:08.866596] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1230095 ] 00:05:14.691 EAL: No free 2048 kB hugepages reported on node 1 00:05:14.951 [2024-04-23 21:06:08.983619] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.951 [2024-04-23 21:06:09.077683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.519 21:06:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:15.519 21:06:09 -- common/autotest_common.sh@850 -- # return 0 00:05:15.519 21:06:09 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1230239 00:05:15.519 21:06:09 -- event/cpu_locks.sh@85 -- # waitforlisten 1230239 /var/tmp/spdk2.sock 00:05:15.519 21:06:09 -- common/autotest_common.sh@817 -- # '[' -z 1230239 ']' 00:05:15.519 21:06:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:15.519 21:06:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:15.519 21:06:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:15.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:15.519 21:06:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:15.519 21:06:09 -- common/autotest_common.sh@10 -- # set +x 00:05:15.519 21:06:09 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:15.519 [2024-04-23 21:06:09.644001] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 00:05:15.519 [2024-04-23 21:06:09.644110] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1230239 ] 00:05:15.519 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.778 [2024-04-23 21:06:09.797765] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:15.778 [2024-04-23 21:06:09.797803] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.778 [2024-04-23 21:06:09.976969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.740 21:06:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:16.740 21:06:10 -- common/autotest_common.sh@850 -- # return 0 00:05:16.740 21:06:10 -- event/cpu_locks.sh@87 -- # locks_exist 1230095 00:05:16.740 21:06:10 -- event/cpu_locks.sh@22 -- # lslocks -p 1230095 00:05:16.740 21:06:10 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:16.740 lslocks: write error 00:05:16.740 21:06:10 -- event/cpu_locks.sh@89 -- # killprocess 1230095 00:05:16.740 21:06:10 -- common/autotest_common.sh@936 -- # '[' -z 1230095 ']' 00:05:16.740 21:06:10 -- common/autotest_common.sh@940 -- # kill -0 1230095 00:05:16.740 21:06:10 -- common/autotest_common.sh@941 -- # uname 00:05:16.740 21:06:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:16.740 21:06:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1230095 00:05:16.999 21:06:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:16.999 21:06:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:16.999 21:06:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1230095' 00:05:16.999 killing process with pid 1230095 00:05:16.999 21:06:11 -- common/autotest_common.sh@955 -- # kill 1230095 00:05:16.999 21:06:11 -- common/autotest_common.sh@960 -- # wait 1230095 00:05:18.903 21:06:12 -- event/cpu_locks.sh@90 -- # killprocess 1230239 00:05:18.903 21:06:12 -- common/autotest_common.sh@936 -- # '[' -z 1230239 ']' 00:05:18.903 21:06:12 -- common/autotest_common.sh@940 -- # kill -0 1230239 00:05:18.903 21:06:12 -- common/autotest_common.sh@941 -- # uname 00:05:18.903 21:06:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:18.903 21:06:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1230239 00:05:18.903 21:06:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:18.903 21:06:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:18.903 21:06:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1230239' 00:05:18.903 killing process with pid 1230239 00:05:18.903 21:06:12 -- common/autotest_common.sh@955 -- # kill 1230239 00:05:18.903 21:06:12 -- common/autotest_common.sh@960 -- # wait 1230239 00:05:19.472 00:05:19.472 real 0m4.786s 00:05:19.472 user 0m4.815s 00:05:19.472 sys 0m1.000s 00:05:19.472 21:06:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:19.472 21:06:13 -- common/autotest_common.sh@10 -- # set +x 00:05:19.472 ************************************ 00:05:19.472 END TEST non_locking_app_on_locked_coremask 00:05:19.472 ************************************ 00:05:19.472 21:06:13 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:19.472 21:06:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:19.472 21:06:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:19.472 21:06:13 -- common/autotest_common.sh@10 -- # set +x 00:05:19.472 ************************************ 00:05:19.472 START TEST locking_app_on_unlocked_coremask 00:05:19.472 ************************************ 00:05:19.472 21:06:13 -- common/autotest_common.sh@1111 -- # locking_app_on_unlocked_coremask 00:05:19.472 21:06:13 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1231160 00:05:19.472 21:06:13 -- 
event/cpu_locks.sh@99 -- # waitforlisten 1231160 /var/tmp/spdk.sock 00:05:19.472 21:06:13 -- common/autotest_common.sh@817 -- # '[' -z 1231160 ']' 00:05:19.472 21:06:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.472 21:06:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:19.472 21:06:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:19.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:19.472 21:06:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:19.472 21:06:13 -- common/autotest_common.sh@10 -- # set +x 00:05:19.472 21:06:13 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:19.731 [2024-04-23 21:06:13.817724] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 00:05:19.731 [2024-04-23 21:06:13.817860] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1231160 ] 00:05:19.731 EAL: No free 2048 kB hugepages reported on node 1 00:05:19.731 [2024-04-23 21:06:13.949615] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:19.731 [2024-04-23 21:06:13.949706] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.990 [2024-04-23 21:06:14.047034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.249 21:06:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:20.249 21:06:14 -- common/autotest_common.sh@850 -- # return 0 00:05:20.249 21:06:14 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1231181 00:05:20.249 21:06:14 -- event/cpu_locks.sh@103 -- # waitforlisten 1231181 /var/tmp/spdk2.sock 00:05:20.249 21:06:14 -- common/autotest_common.sh@817 -- # '[' -z 1231181 ']' 00:05:20.249 21:06:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:20.249 21:06:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:20.249 21:06:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:20.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:20.249 21:06:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:20.249 21:06:14 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:20.249 21:06:14 -- common/autotest_common.sh@10 -- # set +x 00:05:20.508 [2024-04-23 21:06:14.615810] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 
00:05:20.508 [2024-04-23 21:06:14.615954] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1231181 ] 00:05:20.508 EAL: No free 2048 kB hugepages reported on node 1 00:05:20.767 [2024-04-23 21:06:14.796812] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.767 [2024-04-23 21:06:14.992476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.703 21:06:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:21.703 21:06:15 -- common/autotest_common.sh@850 -- # return 0 00:05:21.703 21:06:15 -- event/cpu_locks.sh@105 -- # locks_exist 1231181 00:05:21.703 21:06:15 -- event/cpu_locks.sh@22 -- # lslocks -p 1231181 00:05:21.703 21:06:15 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:21.962 lslocks: write error 00:05:21.962 21:06:16 -- event/cpu_locks.sh@107 -- # killprocess 1231160 00:05:21.962 21:06:16 -- common/autotest_common.sh@936 -- # '[' -z 1231160 ']' 00:05:21.962 21:06:16 -- common/autotest_common.sh@940 -- # kill -0 1231160 00:05:21.962 21:06:16 -- common/autotest_common.sh@941 -- # uname 00:05:21.962 21:06:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:21.962 21:06:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1231160 00:05:21.962 21:06:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:21.962 21:06:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:21.962 21:06:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1231160' 00:05:21.962 killing process with pid 1231160 00:05:21.962 21:06:16 -- common/autotest_common.sh@955 -- # kill 1231160 00:05:21.962 21:06:16 -- common/autotest_common.sh@960 -- # wait 1231160 00:05:23.866 21:06:17 -- event/cpu_locks.sh@108 -- # killprocess 1231181 00:05:23.866 21:06:17 -- common/autotest_common.sh@936 -- # '[' -z 1231181 ']' 00:05:23.866 21:06:17 -- common/autotest_common.sh@940 -- # kill -0 1231181 00:05:23.866 21:06:17 -- common/autotest_common.sh@941 -- # uname 00:05:23.866 21:06:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:23.866 21:06:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1231181 00:05:23.866 21:06:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:23.866 21:06:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:23.866 21:06:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1231181' 00:05:23.866 killing process with pid 1231181 00:05:23.866 21:06:17 -- common/autotest_common.sh@955 -- # kill 1231181 00:05:23.866 21:06:17 -- common/autotest_common.sh@960 -- # wait 1231181 00:05:24.804 00:05:24.804 real 0m4.999s 00:05:24.804 user 0m5.055s 00:05:24.804 sys 0m1.070s 00:05:24.804 21:06:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:24.804 21:06:18 -- common/autotest_common.sh@10 -- # set +x 00:05:24.804 ************************************ 00:05:24.804 END TEST locking_app_on_unlocked_coremask 00:05:24.804 ************************************ 00:05:24.804 21:06:18 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:24.804 21:06:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:24.804 21:06:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:24.804 21:06:18 -- common/autotest_common.sh@10 -- # set +x 00:05:24.804 
************************************ 00:05:24.804 START TEST locking_app_on_locked_coremask 00:05:24.804 ************************************ 00:05:24.804 21:06:18 -- common/autotest_common.sh@1111 -- # locking_app_on_locked_coremask 00:05:24.804 21:06:18 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1232106 00:05:24.804 21:06:18 -- event/cpu_locks.sh@116 -- # waitforlisten 1232106 /var/tmp/spdk.sock 00:05:24.804 21:06:18 -- common/autotest_common.sh@817 -- # '[' -z 1232106 ']' 00:05:24.804 21:06:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.804 21:06:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:24.804 21:06:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.804 21:06:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:24.804 21:06:18 -- common/autotest_common.sh@10 -- # set +x 00:05:24.804 21:06:18 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:24.804 [2024-04-23 21:06:18.951054] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 00:05:24.804 [2024-04-23 21:06:18.951184] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1232106 ] 00:05:24.804 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.064 [2024-04-23 21:06:19.081448] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.064 [2024-04-23 21:06:19.172879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.631 21:06:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:25.631 21:06:19 -- common/autotest_common.sh@850 -- # return 0 00:05:25.631 21:06:19 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1232287 00:05:25.631 21:06:19 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1232287 /var/tmp/spdk2.sock 00:05:25.631 21:06:19 -- common/autotest_common.sh@638 -- # local es=0 00:05:25.631 21:06:19 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 1232287 /var/tmp/spdk2.sock 00:05:25.631 21:06:19 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:05:25.631 21:06:19 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:25.631 21:06:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:25.631 21:06:19 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:05:25.631 21:06:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:25.632 21:06:19 -- common/autotest_common.sh@641 -- # waitforlisten 1232287 /var/tmp/spdk2.sock 00:05:25.632 21:06:19 -- common/autotest_common.sh@817 -- # '[' -z 1232287 ']' 00:05:25.632 21:06:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:25.632 21:06:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:25.632 21:06:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:25.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
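The NOT waitforlisten sequence traced below is the negative path: the second target on /var/tmp/spdk2.sock must fail to start because pid 1232106 already holds the core-0 lock. A simplified sketch of NOT's semantics as the xtrace suggests (the real helper in autotest_common.sh also validates its argument with valid_exec_arg and can match an expected error string):

    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && es=$(( es - 128 ))  # strip the 128+signal bias (the es=234 -> es=106 step seen later in this log)
        (( es != 0 ))                         # invert: NOT succeeds only when the wrapped command failed
    }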
00:05:25.632 21:06:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:25.632 21:06:19 -- common/autotest_common.sh@10 -- # set +x 00:05:25.632 [2024-04-23 21:06:19.752224] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 00:05:25.632 [2024-04-23 21:06:19.752366] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1232287 ] 00:05:25.632 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.889 [2024-04-23 21:06:19.922442] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1232106 has claimed it. 00:05:25.890 [2024-04-23 21:06:19.922495] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:26.148 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (1232287) - No such process 00:05:26.148 ERROR: process (pid: 1232287) is no longer running 00:05:26.148 21:06:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:26.148 21:06:20 -- common/autotest_common.sh@850 -- # return 1 00:05:26.148 21:06:20 -- common/autotest_common.sh@641 -- # es=1 00:05:26.148 21:06:20 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:26.148 21:06:20 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:26.148 21:06:20 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:26.148 21:06:20 -- event/cpu_locks.sh@122 -- # locks_exist 1232106 00:05:26.148 21:06:20 -- event/cpu_locks.sh@22 -- # lslocks -p 1232106 00:05:26.148 21:06:20 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:26.407 lslocks: write error 00:05:26.407 21:06:20 -- event/cpu_locks.sh@124 -- # killprocess 1232106 00:05:26.407 21:06:20 -- common/autotest_common.sh@936 -- # '[' -z 1232106 ']' 00:05:26.407 21:06:20 -- common/autotest_common.sh@940 -- # kill -0 1232106 00:05:26.407 21:06:20 -- common/autotest_common.sh@941 -- # uname 00:05:26.407 21:06:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:26.407 21:06:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1232106 00:05:26.407 21:06:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:26.407 21:06:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:26.407 21:06:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1232106' 00:05:26.407 killing process with pid 1232106 00:05:26.407 21:06:20 -- common/autotest_common.sh@955 -- # kill 1232106 00:05:26.407 21:06:20 -- common/autotest_common.sh@960 -- # wait 1232106 00:05:27.345 00:05:27.345 real 0m2.517s 00:05:27.345 user 0m2.597s 00:05:27.345 sys 0m0.717s 00:05:27.345 21:06:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:27.345 21:06:21 -- common/autotest_common.sh@10 -- # set +x 00:05:27.345 ************************************ 00:05:27.345 END TEST locking_app_on_locked_coremask 00:05:27.345 ************************************ 00:05:27.345 21:06:21 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:27.345 21:06:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:27.345 21:06:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:27.345 21:06:21 -- common/autotest_common.sh@10 -- # set +x 00:05:27.345 ************************************ 00:05:27.345 START TEST locking_overlapped_coremask 00:05:27.345 
************************************ 00:05:27.345 21:06:21 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask 00:05:27.345 21:06:21 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1232737 00:05:27.345 21:06:21 -- event/cpu_locks.sh@133 -- # waitforlisten 1232737 /var/tmp/spdk.sock 00:05:27.345 21:06:21 -- common/autotest_common.sh@817 -- # '[' -z 1232737 ']' 00:05:27.345 21:06:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.345 21:06:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:27.345 21:06:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.345 21:06:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:27.345 21:06:21 -- common/autotest_common.sh@10 -- # set +x 00:05:27.345 21:06:21 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:27.345 [2024-04-23 21:06:21.613441] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 00:05:27.346 [2024-04-23 21:06:21.613575] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1232737 ] 00:05:27.605 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.605 [2024-04-23 21:06:21.744576] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:27.605 [2024-04-23 21:06:21.842028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:27.605 [2024-04-23 21:06:21.842047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.605 [2024-04-23 21:06:21.842051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:28.174 21:06:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:28.174 21:06:22 -- common/autotest_common.sh@850 -- # return 0 00:05:28.174 21:06:22 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1232757 00:05:28.174 21:06:22 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1232757 /var/tmp/spdk2.sock 00:05:28.174 21:06:22 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:28.174 21:06:22 -- common/autotest_common.sh@638 -- # local es=0 00:05:28.174 21:06:22 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 1232757 /var/tmp/spdk2.sock 00:05:28.174 21:06:22 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:05:28.174 21:06:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:28.174 21:06:22 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:05:28.174 21:06:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:28.174 21:06:22 -- common/autotest_common.sh@641 -- # waitforlisten 1232757 /var/tmp/spdk2.sock 00:05:28.174 21:06:22 -- common/autotest_common.sh@817 -- # '[' -z 1232757 ']' 00:05:28.174 21:06:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:28.174 21:06:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:28.174 21:06:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:05:28.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:28.174 21:06:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:28.174 21:06:22 -- common/autotest_common.sh@10 -- # set +x 00:05:28.174 [2024-04-23 21:06:22.418497] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 00:05:28.174 [2024-04-23 21:06:22.418647] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1232757 ] 00:05:28.433 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.433 [2024-04-23 21:06:22.593765] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1232737 has claimed it. 00:05:28.433 [2024-04-23 21:06:22.593814] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:29.000 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (1232757) - No such process 00:05:29.000 ERROR: process (pid: 1232757) is no longer running 00:05:29.000 21:06:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:29.000 21:06:22 -- common/autotest_common.sh@850 -- # return 1 00:05:29.001 21:06:22 -- common/autotest_common.sh@641 -- # es=1 00:05:29.001 21:06:22 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:29.001 21:06:22 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:29.001 21:06:22 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:29.001 21:06:22 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:29.001 21:06:22 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:29.001 21:06:22 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:29.001 21:06:22 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:29.001 21:06:22 -- event/cpu_locks.sh@141 -- # killprocess 1232737 00:05:29.001 21:06:22 -- common/autotest_common.sh@936 -- # '[' -z 1232737 ']' 00:05:29.001 21:06:22 -- common/autotest_common.sh@940 -- # kill -0 1232737 00:05:29.001 21:06:22 -- common/autotest_common.sh@941 -- # uname 00:05:29.001 21:06:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:29.001 21:06:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1232737 00:05:29.001 21:06:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:29.001 21:06:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:29.001 21:06:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1232737' 00:05:29.001 killing process with pid 1232737 00:05:29.001 21:06:23 -- common/autotest_common.sh@955 -- # kill 1232737 00:05:29.001 21:06:23 -- common/autotest_common.sh@960 -- # wait 1232737 00:05:29.937 00:05:29.937 real 0m2.347s 00:05:29.937 user 0m6.079s 00:05:29.937 sys 0m0.576s 00:05:29.937 21:06:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:29.937 21:06:23 -- common/autotest_common.sh@10 -- # set +x 00:05:29.937 ************************************ 00:05:29.937 END TEST locking_overlapped_coremask 00:05:29.937 ************************************ 00:05:29.937 21:06:23 -- event/cpu_locks.sh@172 -- # run_test 
locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:29.937 21:06:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:29.937 21:06:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:29.937 21:06:23 -- common/autotest_common.sh@10 -- # set +x 00:05:29.937 ************************************ 00:05:29.937 START TEST locking_overlapped_coremask_via_rpc 00:05:29.937 ************************************ 00:05:29.937 21:06:24 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask_via_rpc 00:05:29.937 21:06:24 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1233111 00:05:29.938 21:06:24 -- event/cpu_locks.sh@149 -- # waitforlisten 1233111 /var/tmp/spdk.sock 00:05:29.938 21:06:24 -- common/autotest_common.sh@817 -- # '[' -z 1233111 ']' 00:05:29.938 21:06:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.938 21:06:24 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:29.938 21:06:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:29.938 21:06:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.938 21:06:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:29.938 21:06:24 -- common/autotest_common.sh@10 -- # set +x 00:05:29.938 [2024-04-23 21:06:24.087611] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 00:05:29.938 [2024-04-23 21:06:24.087724] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1233111 ] 00:05:29.938 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.938 [2024-04-23 21:06:24.207063] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:29.938 [2024-04-23 21:06:24.207094] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:30.197 [2024-04-23 21:06:24.306732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:30.197 [2024-04-23 21:06:24.306828] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.197 [2024-04-23 21:06:24.306836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:30.765 21:06:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:30.765 21:06:24 -- common/autotest_common.sh@850 -- # return 0 00:05:30.765 21:06:24 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1233385 00:05:30.766 21:06:24 -- event/cpu_locks.sh@153 -- # waitforlisten 1233385 /var/tmp/spdk2.sock 00:05:30.766 21:06:24 -- common/autotest_common.sh@817 -- # '[' -z 1233385 ']' 00:05:30.766 21:06:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:30.766 21:06:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:30.766 21:06:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:30.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
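The expected failure in this via_rpc variant follows directly from the core masks: both targets start with --disable-cpumask-locks on overlapping masks, the first then claims its cores over RPC, so the second's claim must collide:

    # -m 0x7  = 0b00111 -> reactors on cores 0,1,2 (claimed by pid 1233111)
    # -m 0x1c = 0b11100 -> reactors on cores 2,3,4
    #                      overlap on core 2 -> the "Cannot create lock on core 2" error below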
00:05:30.766 21:06:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:30.766 21:06:24 -- common/autotest_common.sh@10 -- # set +x 00:05:30.766 21:06:24 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:30.766 [2024-04-23 21:06:24.895342] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 00:05:30.766 [2024-04-23 21:06:24.895487] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1233385 ] 00:05:30.766 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.025 [2024-04-23 21:06:25.066534] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:31.025 [2024-04-23 21:06:25.066575] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:31.025 [2024-04-23 21:06:25.249809] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:31.025 [2024-04-23 21:06:25.249940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:31.025 [2024-04-23 21:06:25.249974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:05:31.984 21:06:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:31.984 21:06:25 -- common/autotest_common.sh@850 -- # return 0 00:05:31.984 21:06:25 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:31.984 21:06:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:31.984 21:06:25 -- common/autotest_common.sh@10 -- # set +x 00:05:31.984 21:06:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:31.984 21:06:25 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:31.984 21:06:25 -- common/autotest_common.sh@638 -- # local es=0 00:05:31.984 21:06:25 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:31.984 21:06:25 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:05:31.984 21:06:25 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:31.984 21:06:25 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:05:31.984 21:06:25 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:31.984 21:06:25 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:31.984 21:06:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:31.984 21:06:25 -- common/autotest_common.sh@10 -- # set +x 00:05:31.984 [2024-04-23 21:06:25.952749] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1233111 has claimed it. 
00:05:31.984 request: 00:05:31.984 { 00:05:31.984 "method": "framework_enable_cpumask_locks", 00:05:31.984 "req_id": 1 00:05:31.984 } 00:05:31.984 Got JSON-RPC error response 00:05:31.984 response: 00:05:31.984 { 00:05:31.984 "code": -32603, 00:05:31.984 "message": "Failed to claim CPU core: 2" 00:05:31.984 } 00:05:31.984 21:06:25 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:05:31.984 21:06:25 -- common/autotest_common.sh@641 -- # es=1 00:05:31.984 21:06:25 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:31.984 21:06:25 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:31.984 21:06:25 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:31.984 21:06:25 -- event/cpu_locks.sh@158 -- # waitforlisten 1233111 /var/tmp/spdk.sock 00:05:31.984 21:06:25 -- common/autotest_common.sh@817 -- # '[' -z 1233111 ']' 00:05:31.984 21:06:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.984 21:06:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:31.984 21:06:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.984 21:06:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:31.984 21:06:25 -- common/autotest_common.sh@10 -- # set +x 00:05:31.984 21:06:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:31.984 21:06:26 -- common/autotest_common.sh@850 -- # return 0 00:05:31.984 21:06:26 -- event/cpu_locks.sh@159 -- # waitforlisten 1233385 /var/tmp/spdk2.sock 00:05:31.984 21:06:26 -- common/autotest_common.sh@817 -- # '[' -z 1233385 ']' 00:05:31.984 21:06:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:31.984 21:06:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:31.984 21:06:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:31.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
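The request/response pair dumped above is plain JSON-RPC over the target's Unix socket; outside the harness the same probe could be issued directly (rpc.py path and subcommand assumed to match this SPDK revision):

    # manual equivalent against the second target's socket:
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # expected to fail here with code -32603, "Failed to claim CPU core: 2"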
00:05:31.984 21:06:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:31.984 21:06:26 -- common/autotest_common.sh@10 -- # set +x 00:05:32.243 21:06:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:32.243 21:06:26 -- common/autotest_common.sh@850 -- # return 0 00:05:32.243 21:06:26 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:32.243 21:06:26 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:32.243 21:06:26 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:32.243 21:06:26 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:32.243 00:05:32.243 real 0m2.286s 00:05:32.243 user 0m0.704s 00:05:32.243 sys 0m0.164s 00:05:32.243 21:06:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:32.243 21:06:26 -- common/autotest_common.sh@10 -- # set +x 00:05:32.243 ************************************ 00:05:32.243 END TEST locking_overlapped_coremask_via_rpc 00:05:32.243 ************************************ 00:05:32.243 21:06:26 -- event/cpu_locks.sh@174 -- # cleanup 00:05:32.243 21:06:26 -- event/cpu_locks.sh@15 -- # [[ -z 1233111 ]] 00:05:32.243 21:06:26 -- event/cpu_locks.sh@15 -- # killprocess 1233111 00:05:32.243 21:06:26 -- common/autotest_common.sh@936 -- # '[' -z 1233111 ']' 00:05:32.243 21:06:26 -- common/autotest_common.sh@940 -- # kill -0 1233111 00:05:32.243 21:06:26 -- common/autotest_common.sh@941 -- # uname 00:05:32.243 21:06:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:32.243 21:06:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1233111 00:05:32.243 21:06:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:32.243 21:06:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:32.243 21:06:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1233111' 00:05:32.243 killing process with pid 1233111 00:05:32.243 21:06:26 -- common/autotest_common.sh@955 -- # kill 1233111 00:05:32.243 21:06:26 -- common/autotest_common.sh@960 -- # wait 1233111 00:05:33.180 21:06:27 -- event/cpu_locks.sh@16 -- # [[ -z 1233385 ]] 00:05:33.180 21:06:27 -- event/cpu_locks.sh@16 -- # killprocess 1233385 00:05:33.180 21:06:27 -- common/autotest_common.sh@936 -- # '[' -z 1233385 ']' 00:05:33.180 21:06:27 -- common/autotest_common.sh@940 -- # kill -0 1233385 00:05:33.180 21:06:27 -- common/autotest_common.sh@941 -- # uname 00:05:33.180 21:06:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:33.180 21:06:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1233385 00:05:33.180 21:06:27 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:05:33.180 21:06:27 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:05:33.180 21:06:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1233385' 00:05:33.180 killing process with pid 1233385 00:05:33.180 21:06:27 -- common/autotest_common.sh@955 -- # kill 1233385 00:05:33.180 21:06:27 -- common/autotest_common.sh@960 -- # wait 1233385 00:05:34.119 21:06:28 -- event/cpu_locks.sh@18 -- # rm -f 00:05:34.119 21:06:28 -- event/cpu_locks.sh@1 -- # cleanup 00:05:34.119 21:06:28 -- event/cpu_locks.sh@15 -- # [[ -z 1233111 ]] 00:05:34.119 21:06:28 -- event/cpu_locks.sh@15 -- # killprocess 1233111 
00:05:34.119 21:06:28 -- common/autotest_common.sh@936 -- # '[' -z 1233111 ']' 00:05:34.119 21:06:28 -- common/autotest_common.sh@940 -- # kill -0 1233111 00:05:34.119 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (1233111) - No such process 00:05:34.119 21:06:28 -- common/autotest_common.sh@963 -- # echo 'Process with pid 1233111 is not found' 00:05:34.119 Process with pid 1233111 is not found 00:05:34.119 21:06:28 -- event/cpu_locks.sh@16 -- # [[ -z 1233385 ]] 00:05:34.119 21:06:28 -- event/cpu_locks.sh@16 -- # killprocess 1233385 00:05:34.119 21:06:28 -- common/autotest_common.sh@936 -- # '[' -z 1233385 ']' 00:05:34.119 21:06:28 -- common/autotest_common.sh@940 -- # kill -0 1233385 00:05:34.119 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (1233385) - No such process 00:05:34.119 21:06:28 -- common/autotest_common.sh@963 -- # echo 'Process with pid 1233385 is not found' 00:05:34.119 Process with pid 1233385 is not found 00:05:34.119 21:06:28 -- event/cpu_locks.sh@18 -- # rm -f 00:05:34.119 00:05:34.119 real 0m23.545s 00:05:34.119 user 0m37.608s 00:05:34.119 sys 0m5.990s 00:05:34.119 21:06:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:34.119 21:06:28 -- common/autotest_common.sh@10 -- # set +x 00:05:34.119 ************************************ 00:05:34.119 END TEST cpu_locks 00:05:34.119 ************************************ 00:05:34.119 00:05:34.119 real 0m47.120s 00:05:34.119 user 1m22.921s 00:05:34.119 sys 0m9.501s 00:05:34.119 21:06:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:34.119 21:06:28 -- common/autotest_common.sh@10 -- # set +x 00:05:34.119 ************************************ 00:05:34.119 END TEST event 00:05:34.119 ************************************ 00:05:34.119 21:06:28 -- spdk/autotest.sh@178 -- # run_test thread /var/jenkins/workspace/dsa-phy-autotest/spdk/test/thread/thread.sh 00:05:34.119 21:06:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:34.119 21:06:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:34.119 21:06:28 -- common/autotest_common.sh@10 -- # set +x 00:05:34.119 ************************************ 00:05:34.119 START TEST thread 00:05:34.119 ************************************ 00:05:34.119 21:06:28 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/thread/thread.sh 00:05:34.119 * Looking for test storage... 00:05:34.119 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/thread 00:05:34.119 21:06:28 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:34.119 21:06:28 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:05:34.119 21:06:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:34.119 21:06:28 -- common/autotest_common.sh@10 -- # set +x 00:05:34.378 ************************************ 00:05:34.378 START TEST thread_poller_perf 00:05:34.378 ************************************ 00:05:34.378 21:06:28 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:34.378 [2024-04-23 21:06:28.494786] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 
00:05:34.378 [2024-04-23 21:06:28.494924] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1234096 ] 00:05:34.378 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.378 [2024-04-23 21:06:28.627250] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.637 [2024-04-23 21:06:28.725697] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.637 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:36.016 ====================================== 00:05:36.016 busy:1906031464 (cyc) 00:05:36.016 total_run_count: 390000 00:05:36.016 tsc_hz: 1900000000 (cyc) 00:05:36.016 ====================================== 00:05:36.016 poller_cost: 4887 (cyc), 2572 (nsec) 00:05:36.016 00:05:36.016 real 0m1.432s 00:05:36.016 user 0m1.272s 00:05:36.016 sys 0m0.152s 00:05:36.016 21:06:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:36.016 21:06:29 -- common/autotest_common.sh@10 -- # set +x 00:05:36.016 ************************************ 00:05:36.016 END TEST thread_poller_perf 00:05:36.016 ************************************ 00:05:36.016 21:06:29 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:36.016 21:06:29 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:05:36.016 21:06:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:36.016 21:06:29 -- common/autotest_common.sh@10 -- # set +x 00:05:36.016 ************************************ 00:05:36.016 START TEST thread_poller_perf 00:05:36.016 ************************************ 00:05:36.016 21:06:30 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:36.016 [2024-04-23 21:06:30.039393] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 00:05:36.016 [2024-04-23 21:06:30.039539] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1234416 ] 00:05:36.016 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.016 [2024-04-23 21:06:30.174062] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.016 [2024-04-23 21:06:30.268151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.016 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:05:37.395 ====================================== 00:05:37.395 busy:1901970970 (cyc) 00:05:37.395 total_run_count: 5344000 00:05:37.395 tsc_hz: 1900000000 (cyc) 00:05:37.395 ====================================== 00:05:37.395 poller_cost: 355 (cyc), 186 (nsec) 00:05:37.395 00:05:37.395 real 0m1.424s 00:05:37.395 user 0m1.276s 00:05:37.395 sys 0m0.140s 00:05:37.395 21:06:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:37.396 21:06:31 -- common/autotest_common.sh@10 -- # set +x 00:05:37.396 ************************************ 00:05:37.396 END TEST thread_poller_perf 00:05:37.396 ************************************ 00:05:37.396 21:06:31 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:37.396 00:05:37.396 real 0m3.182s 00:05:37.396 user 0m2.657s 00:05:37.396 sys 0m0.502s 00:05:37.396 21:06:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:37.396 21:06:31 -- common/autotest_common.sh@10 -- # set +x 00:05:37.396 ************************************ 00:05:37.396 END TEST thread 00:05:37.396 ************************************ 00:05:37.396 21:06:31 -- spdk/autotest.sh@179 -- # run_test accel /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/accel.sh 00:05:37.396 21:06:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:37.396 21:06:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:37.396 21:06:31 -- common/autotest_common.sh@10 -- # set +x 00:05:37.396 ************************************ 00:05:37.396 START TEST accel 00:05:37.396 ************************************ 00:05:37.396 21:06:31 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/accel.sh 00:05:37.396 * Looking for test storage... 00:05:37.396 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel 00:05:37.396 21:06:31 -- accel/accel.sh@81 -- # declare -A expected_opcs 00:05:37.396 21:06:31 -- accel/accel.sh@82 -- # get_expected_opcs 00:05:37.396 21:06:31 -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:37.396 21:06:31 -- accel/accel.sh@62 -- # spdk_tgt_pid=1234792 00:05:37.396 21:06:31 -- accel/accel.sh@63 -- # waitforlisten 1234792 00:05:37.396 21:06:31 -- common/autotest_common.sh@817 -- # '[' -z 1234792 ']' 00:05:37.396 21:06:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.396 21:06:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:37.396 21:06:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
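For reference, the poller_perf summaries above compute poller_cost as busy cycles divided by poll count, then convert cycles to wall time via the reported tsc_hz; both runs check out:

    # poller_cost = busy_cycles / total_run_count; nsec = cyc / (tsc_hz / 1e9)
    echo $(( 1906031464 / 390000 ))   # -> 4887 cyc/poll; 4887 / 1.9 ~= 2572 nsec (1 usec period run)
    echo $(( 1901970970 / 5344000 ))  # -> 355 cyc/poll;  355 / 1.9 ~= 186 nsec  (0 usec period run)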
00:05:37.396 21:06:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:37.396 21:06:31 -- common/autotest_common.sh@10 -- # set +x 00:05:37.396 21:06:31 -- accel/accel.sh@61 -- # build_accel_config 00:05:37.396 21:06:31 -- accel/accel.sh@61 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:37.396 21:06:31 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:37.396 21:06:31 -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:05:37.396 21:06:31 -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:05:37.396 21:06:31 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:05:37.396 21:06:31 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:05:37.396 21:06:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:37.396 21:06:31 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:37.396 21:06:31 -- accel/accel.sh@40 -- # local IFS=, 00:05:37.396 21:06:31 -- accel/accel.sh@41 -- # jq -r . 00:05:37.656 [2024-04-23 21:06:31.753346] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 00:05:37.656 [2024-04-23 21:06:31.753486] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1234792 ] 00:05:37.656 EAL: No free 2048 kB hugepages reported on node 1 00:05:37.656 [2024-04-23 21:06:31.887214] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.979 [2024-04-23 21:06:31.984816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.979 [2024-04-23 21:06:31.989404] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:05:37.979 [2024-04-23 21:06:31.997336] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:05:46.128 21:06:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:46.128 21:06:39 -- common/autotest_common.sh@850 -- # return 0 00:05:46.128 21:06:39 -- accel/accel.sh@65 -- # [[ 1 -gt 0 ]] 00:05:46.128 21:06:39 -- accel/accel.sh@65 -- # check_save_config dsa_scan_accel_module 00:05:46.128 21:06:39 -- accel/accel.sh@56 -- # rpc_cmd save_config 00:05:46.128 21:06:39 -- accel/accel.sh@56 -- # jq -r '.subsystems[] | select(.subsystem=="accel").config[]' 00:05:46.128 21:06:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:46.128 21:06:39 -- common/autotest_common.sh@10 -- # set +x 00:05:46.128 21:06:39 -- accel/accel.sh@56 -- # grep dsa_scan_accel_module 00:05:46.128 21:06:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:46.128 "method": "dsa_scan_accel_module", 00:05:46.128 21:06:39 -- accel/accel.sh@66 -- # [[ 1 -gt 0 ]] 00:05:46.128 21:06:39 -- accel/accel.sh@66 -- # check_save_config iaa_scan_accel_module 00:05:46.128 21:06:39 -- accel/accel.sh@56 -- # rpc_cmd save_config 00:05:46.128 21:06:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:46.128 21:06:39 -- common/autotest_common.sh@10 -- # set +x 00:05:46.128 21:06:39 -- accel/accel.sh@56 -- # jq -r '.subsystems[] | select(.subsystem=="accel").config[]' 00:05:46.128 21:06:39 -- accel/accel.sh@56 -- # grep iaa_scan_accel_module 00:05:46.128 21:06:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:46.128 "method": "iaa_scan_accel_module" 00:05:46.128 21:06:39 -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:05:46.128 21:06:39 -- accel/accel.sh@68 -- # [[ -n '' ]] 00:05:46.128 21:06:39 -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py 
accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:46.128 21:06:39 -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:05:46.128 21:06:39 -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:05:46.128 21:06:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:46.128 21:06:39 -- common/autotest_common.sh@10 -- # set +x 00:05:46.128 21:06:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:46.128 21:06:39 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:46.128 21:06:39 -- accel/accel.sh@72 -- # IFS== 00:05:46.128 21:06:39 -- accel/accel.sh@72 -- # read -r opc module 00:05:46.128 21:06:39 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=dsa 00:05:46.128 21:06:39 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:46.128 21:06:39 -- accel/accel.sh@72 -- # IFS== 00:05:46.128 21:06:39 -- accel/accel.sh@72 -- # read -r opc module 00:05:46.128 21:06:39 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=dsa 00:05:46.128 21:06:39 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:46.128 21:06:39 -- accel/accel.sh@72 -- # IFS== 00:05:46.128 21:06:39 -- accel/accel.sh@72 -- # read -r opc module 00:05:46.128 21:06:39 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=dsa 00:05:46.128 21:06:39 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:46.128 21:06:39 -- accel/accel.sh@72 -- # IFS== 00:05:46.128 21:06:39 -- accel/accel.sh@72 -- # read -r opc module 00:05:46.129 21:06:39 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=dsa 00:05:46.129 21:06:39 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:46.129 21:06:39 -- accel/accel.sh@72 -- # IFS== 00:05:46.129 21:06:39 -- accel/accel.sh@72 -- # read -r opc module 00:05:46.129 21:06:39 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=dsa 00:05:46.129 21:06:39 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:46.129 21:06:39 -- accel/accel.sh@72 -- # IFS== 00:05:46.129 21:06:39 -- accel/accel.sh@72 -- # read -r opc module 00:05:46.129 21:06:39 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=dsa 00:05:46.129 21:06:39 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:46.129 21:06:39 -- accel/accel.sh@72 -- # IFS== 00:05:46.129 21:06:39 -- accel/accel.sh@72 -- # read -r opc module 00:05:46.129 21:06:39 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=iaa 00:05:46.129 21:06:39 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:46.129 21:06:39 -- accel/accel.sh@72 -- # IFS== 00:05:46.129 21:06:39 -- accel/accel.sh@72 -- # read -r opc module 00:05:46.129 21:06:39 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=iaa 00:05:46.129 21:06:39 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:46.129 21:06:39 -- accel/accel.sh@72 -- # IFS== 00:05:46.129 21:06:39 -- accel/accel.sh@72 -- # read -r opc module 00:05:46.129 21:06:39 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:46.129 21:06:39 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:46.129 21:06:39 -- accel/accel.sh@72 -- # IFS== 00:05:46.129 21:06:39 -- accel/accel.sh@72 -- # read -r opc module 00:05:46.129 21:06:39 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:46.129 21:06:39 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:46.129 21:06:39 -- accel/accel.sh@72 -- # IFS== 00:05:46.129 21:06:39 -- accel/accel.sh@72 -- # read -r opc module 00:05:46.129 21:06:39 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:46.129 21:06:39 -- 
accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:46.129 21:06:39 -- accel/accel.sh@72 -- # IFS== 00:05:46.129 21:06:39 -- accel/accel.sh@72 -- # read -r opc module 00:05:46.129 21:06:39 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=dsa 00:05:46.129 21:06:39 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:46.129 21:06:39 -- accel/accel.sh@72 -- # IFS== 00:05:46.129 21:06:39 -- accel/accel.sh@72 -- # read -r opc module 00:05:46.129 21:06:39 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:46.129 21:06:39 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:46.129 21:06:39 -- accel/accel.sh@72 -- # IFS== 00:05:46.129 21:06:39 -- accel/accel.sh@72 -- # read -r opc module 00:05:46.129 21:06:39 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=dsa 00:05:46.129 21:06:39 -- accel/accel.sh@75 -- # killprocess 1234792 00:05:46.129 21:06:39 -- common/autotest_common.sh@936 -- # '[' -z 1234792 ']' 00:05:46.129 21:06:39 -- common/autotest_common.sh@940 -- # kill -0 1234792 00:05:46.129 21:06:39 -- common/autotest_common.sh@941 -- # uname 00:05:46.129 21:06:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:46.129 21:06:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1234792 00:05:46.129 21:06:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:46.129 21:06:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:46.129 21:06:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1234792' 00:05:46.129 killing process with pid 1234792 00:05:46.129 21:06:39 -- common/autotest_common.sh@955 -- # kill 1234792 00:05:46.129 21:06:39 -- common/autotest_common.sh@960 -- # wait 1234792 00:05:48.661 21:06:42 -- accel/accel.sh@76 -- # trap - ERR 00:05:48.661 21:06:42 -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:05:48.661 21:06:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:05:48.661 21:06:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:48.661 21:06:42 -- common/autotest_common.sh@10 -- # set +x 00:05:48.661 21:06:42 -- common/autotest_common.sh@1111 -- # accel_perf -h 00:05:48.661 21:06:42 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:48.661 21:06:42 -- accel/accel.sh@12 -- # build_accel_config 00:05:48.661 21:06:42 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:48.661 21:06:42 -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:05:48.661 21:06:42 -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:05:48.661 21:06:42 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:05:48.661 21:06:42 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:05:48.661 21:06:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:48.661 21:06:42 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:48.661 21:06:42 -- accel/accel.sh@40 -- # local IFS=, 00:05:48.661 21:06:42 -- accel/accel.sh@41 -- # jq -r . 
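Two patterns from the accel prologue traced above, sketched under the assumption that rpc_cmd wraps scripts/rpc.py as elsewhere in this harness: build_accel_config assembles JSON fragments that reach spdk_tgt through a process-substitution config file, and the opcode table is flattened into key=value lines for the expected_opcs map:

    # enable the DSA and IAA modules; fed to: spdk_tgt -c /dev/fd/63
    accel_json_cfg=('{"method": "dsa_scan_accel_module"}' '{"method": "iaa_scan_accel_module"}')

    # turn {"copy":"dsa","compress":"iaa",...} into one "opcode=module" pair per line
    rpc_cmd accel_get_opc_assignments \
        | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'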
00:05:48.661 21:06:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:48.661 21:06:42 -- common/autotest_common.sh@10 -- # set +x 00:05:48.661 21:06:42 -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:48.661 21:06:42 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:48.662 21:06:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:48.662 21:06:42 -- common/autotest_common.sh@10 -- # set +x 00:05:48.662 ************************************ 00:05:48.662 START TEST accel_missing_filename 00:05:48.662 ************************************ 00:05:48.662 21:06:42 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress 00:05:48.662 21:06:42 -- common/autotest_common.sh@638 -- # local es=0 00:05:48.662 21:06:42 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:48.662 21:06:42 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:05:48.662 21:06:42 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:48.662 21:06:42 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:05:48.662 21:06:42 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:48.662 21:06:42 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress 00:05:48.662 21:06:42 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:48.662 21:06:42 -- accel/accel.sh@12 -- # build_accel_config 00:05:48.662 21:06:42 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:48.662 21:06:42 -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:05:48.662 21:06:42 -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:05:48.662 21:06:42 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:05:48.662 21:06:42 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:05:48.662 21:06:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:48.662 21:06:42 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:48.662 21:06:42 -- accel/accel.sh@40 -- # local IFS=, 00:05:48.662 21:06:42 -- accel/accel.sh@41 -- # jq -r . 00:05:48.662 [2024-04-23 21:06:42.688538] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 00:05:48.662 [2024-04-23 21:06:42.688752] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1237094 ] 00:05:48.662 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.662 [2024-04-23 21:06:42.806388] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.662 [2024-04-23 21:06:42.901966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.662 [2024-04-23 21:06:42.906478] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:05:48.662 [2024-04-23 21:06:42.914434] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:05:55.224 [2024-04-23 21:06:49.296129] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:57.126 [2024-04-23 21:06:51.141884] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:05:57.126 A filename is required. 
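accel_missing_filename counts as a pass only when accel_perf fails: compress without -l has no input file, so the app aborts with "A filename is required." and the harness runs the whole thing under a NOT helper that inverts the exit status. A minimal sketch of such an inverter; the autotest_common.sh version also validates the argument type and tracks the status in es, none of which is reproduced here:

NOT() {
    # Succeed only if the wrapped command fails.
    if "$@"; then
        return 1
    fi
    return 0
}

NOT false && echo "failure correctly treated as a pass"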
00:05:57.126 21:06:51 -- common/autotest_common.sh@641 -- # es=234 00:05:57.126 21:06:51 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:57.126 21:06:51 -- common/autotest_common.sh@650 -- # es=106 00:05:57.126 21:06:51 -- common/autotest_common.sh@651 -- # case "$es" in 00:05:57.126 21:06:51 -- common/autotest_common.sh@658 -- # es=1 00:05:57.126 21:06:51 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:57.126 00:05:57.126 real 0m8.650s 00:05:57.126 user 0m2.259s 00:05:57.126 sys 0m0.248s 00:05:57.126 21:06:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:57.126 21:06:51 -- common/autotest_common.sh@10 -- # set +x 00:05:57.126 ************************************ 00:05:57.126 END TEST accel_missing_filename 00:05:57.126 ************************************ 00:05:57.126 21:06:51 -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:05:57.126 21:06:51 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:05:57.126 21:06:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:57.126 21:06:51 -- common/autotest_common.sh@10 -- # set +x 00:05:57.402 ************************************ 00:05:57.402 START TEST accel_compress_verify 00:05:57.402 ************************************ 00:05:57.402 21:06:51 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:05:57.402 21:06:51 -- common/autotest_common.sh@638 -- # local es=0 00:05:57.402 21:06:51 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:05:57.402 21:06:51 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:05:57.402 21:06:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:57.402 21:06:51 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:05:57.402 21:06:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:57.402 21:06:51 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:05:57.402 21:06:51 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:05:57.402 21:06:51 -- accel/accel.sh@12 -- # build_accel_config 00:05:57.402 21:06:51 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:57.402 21:06:51 -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:05:57.402 21:06:51 -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:05:57.402 21:06:51 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:05:57.402 21:06:51 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:05:57.402 21:06:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:57.402 21:06:51 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:57.402 21:06:51 -- accel/accel.sh@40 -- # local IFS=, 00:05:57.402 21:06:51 -- accel/accel.sh@41 -- # jq -r . 00:05:57.402 [2024-04-23 21:06:51.467733] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 
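The es= lines above show how the harness folds accel_perf's raw exit status into a simple pass/fail: a status above 128 gets 128 subtracted, and a case statement then collapses any remaining nonzero code to 1 before the final (( !es == 0 )) check. A rough sketch mirroring the 234 -> 106 -> 1 sequence in the trace:

es=234
(( es > 128 )) && es=$((es - 128))   # 234 -> 106: codes above 128 get 128 subtracted
case "$es" in                        # collapse any remaining nonzero code to 1
    0) ;;
    *) es=1 ;;
esac
echo "normalized exit status: $es"   # prints 1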
00:05:57.402 [2024-04-23 21:06:51.467806] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1238721 ] 00:05:57.402 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.402 [2024-04-23 21:06:51.560010] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.402 [2024-04-23 21:06:51.660369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.402 [2024-04-23 21:06:51.664964] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:05:57.402 [2024-04-23 21:06:51.672930] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:06:03.987 [2024-04-23 21:06:58.079989] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:05.890 [2024-04-23 21:06:59.931153] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:06:05.890 00:06:05.890 Compression does not support the verify option, aborting. 00:06:05.890 21:07:00 -- common/autotest_common.sh@641 -- # es=161 00:06:05.890 21:07:00 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:05.890 21:07:00 -- common/autotest_common.sh@650 -- # es=33 00:06:05.890 21:07:00 -- common/autotest_common.sh@651 -- # case "$es" in 00:06:05.890 21:07:00 -- common/autotest_common.sh@658 -- # es=1 00:06:05.890 21:07:00 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:05.890 00:06:05.890 real 0m8.656s 00:06:05.890 user 0m2.301s 00:06:05.890 sys 0m0.215s 00:06:05.890 21:07:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:05.890 21:07:00 -- common/autotest_common.sh@10 -- # set +x 00:06:05.890 ************************************ 00:06:05.890 END TEST accel_compress_verify 00:06:05.890 ************************************ 00:06:05.890 21:07:00 -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:05.890 21:07:00 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:05.890 21:07:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:05.890 21:07:00 -- common/autotest_common.sh@10 -- # set +x 00:06:06.148 ************************************ 00:06:06.148 START TEST accel_wrong_workload 00:06:06.148 ************************************ 00:06:06.148 21:07:00 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w foobar 00:06:06.148 21:07:00 -- common/autotest_common.sh@638 -- # local es=0 00:06:06.148 21:07:00 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:06.148 21:07:00 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:06:06.148 21:07:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:06.148 21:07:00 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:06:06.148 21:07:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:06.148 21:07:00 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w foobar 00:06:06.148 21:07:00 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:06.148 21:07:00 -- accel/accel.sh@12 -- # build_accel_config 00:06:06.148 21:07:00 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:06.148 21:07:00 -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:06:06.148 21:07:00 -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:06:06.148 21:07:00 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 
00:06:06.148 21:07:00 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:06:06.148 21:07:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:06.148 21:07:00 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:06.148 21:07:00 -- accel/accel.sh@40 -- # local IFS=, 00:06:06.148 21:07:00 -- accel/accel.sh@41 -- # jq -r . 00:06:06.148 Unsupported workload type: foobar 00:06:06.148 [2024-04-23 21:07:00.276550] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:06.148 accel_perf options: 00:06:06.148 [-h help message] 00:06:06.148 [-q queue depth per core] 00:06:06.148 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:06.148 [-T number of threads per core 00:06:06.148 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:06.148 [-t time in seconds] 00:06:06.148 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:06.148 [ dif_verify, , dif_generate, dif_generate_copy 00:06:06.148 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:06.148 [-l for compress/decompress workloads, name of uncompressed input file 00:06:06.148 [-S for crc32c workload, use this seed value (default 0) 00:06:06.148 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:06.148 [-f for fill workload, use this BYTE value (default 255) 00:06:06.148 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:06.148 [-y verify result if this switch is on] 00:06:06.148 [-a tasks to allocate per core (default: same value as -q)] 00:06:06.148 Can be used to spread operations across a wider range of memory. 
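The usage text above lists every knob the remaining tests exercise. For reference, invocations matching the runs in this log, using the same binary path; the -c /dev/fd/62 JSON config that accel.sh always adds is omitted here for brevity:

ACCEL_PERF=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf

# crc32c for 1 second with seed 32, verifying results (-y), as in accel_crc32c
"$ACCEL_PERF" -t 1 -w crc32c -S 32 -y

# fill with byte value 128, queue depth 64 and 64 tasks per core, as in accel_fill
"$ACCEL_PERF" -t 1 -w fill -f 128 -q 64 -a 64 -y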
00:06:06.148 21:07:00 -- common/autotest_common.sh@641 -- # es=1 00:06:06.148 21:07:00 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:06.148 21:07:00 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:06.148 21:07:00 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:06.148 00:06:06.148 real 0m0.064s 00:06:06.148 user 0m0.054s 00:06:06.148 sys 0m0.042s 00:06:06.148 21:07:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:06.148 21:07:00 -- common/autotest_common.sh@10 -- # set +x 00:06:06.148 ************************************ 00:06:06.148 END TEST accel_wrong_workload 00:06:06.148 ************************************ 00:06:06.148 21:07:00 -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:06.148 21:07:00 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:06.148 21:07:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:06.148 21:07:00 -- common/autotest_common.sh@10 -- # set +x 00:06:06.148 ************************************ 00:06:06.148 START TEST accel_negative_buffers 00:06:06.148 ************************************ 00:06:06.408 21:07:00 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:06.408 21:07:00 -- common/autotest_common.sh@638 -- # local es=0 00:06:06.408 21:07:00 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:06.408 21:07:00 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:06:06.408 21:07:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:06.408 21:07:00 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:06:06.408 21:07:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:06.408 21:07:00 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w xor -y -x -1 00:06:06.408 21:07:00 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:06.408 21:07:00 -- accel/accel.sh@12 -- # build_accel_config 00:06:06.408 21:07:00 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:06.408 21:07:00 -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:06:06.408 21:07:00 -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:06:06.408 21:07:00 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:06:06.408 21:07:00 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:06:06.408 21:07:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:06.408 21:07:00 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:06.408 21:07:00 -- accel/accel.sh@40 -- # local IFS=, 00:06:06.408 21:07:00 -- accel/accel.sh@41 -- # jq -r . 00:06:06.408 -x option must be non-negative. 00:06:06.408 [2024-04-23 21:07:00.459102] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:06.408 accel_perf options: 00:06:06.408 [-h help message] 00:06:06.408 [-q queue depth per core] 00:06:06.408 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:06.408 [-T number of threads per core 00:06:06.408 [-o transfer size in bytes (default: 4KiB. 
For compress/decompress, 0 means the input file size)] 00:06:06.408 [-t time in seconds] 00:06:06.408 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:06.408 [ dif_verify, , dif_generate, dif_generate_copy 00:06:06.408 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:06.408 [-l for compress/decompress workloads, name of uncompressed input file 00:06:06.408 [-S for crc32c workload, use this seed value (default 0) 00:06:06.408 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:06.408 [-f for fill workload, use this BYTE value (default 255) 00:06:06.408 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:06.408 [-y verify result if this switch is on] 00:06:06.408 [-a tasks to allocate per core (default: same value as -q)] 00:06:06.408 Can be used to spread operations across a wider range of memory. 00:06:06.408 21:07:00 -- common/autotest_common.sh@641 -- # es=1 00:06:06.408 21:07:00 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:06.408 21:07:00 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:06.408 21:07:00 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:06.408 00:06:06.408 real 0m0.056s 00:06:06.408 user 0m0.058s 00:06:06.408 sys 0m0.029s 00:06:06.408 21:07:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:06.408 21:07:00 -- common/autotest_common.sh@10 -- # set +x 00:06:06.408 ************************************ 00:06:06.408 END TEST accel_negative_buffers 00:06:06.408 ************************************ 00:06:06.408 21:07:00 -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:06.408 21:07:00 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:06.408 21:07:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:06.408 21:07:00 -- common/autotest_common.sh@10 -- # set +x 00:06:06.408 ************************************ 00:06:06.408 START TEST accel_crc32c 00:06:06.408 ************************************ 00:06:06.408 21:07:00 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:06.408 21:07:00 -- accel/accel.sh@16 -- # local accel_opc 00:06:06.408 21:07:00 -- accel/accel.sh@17 -- # local accel_module 00:06:06.408 21:07:00 -- accel/accel.sh@19 -- # IFS=: 00:06:06.408 21:07:00 -- accel/accel.sh@19 -- # read -r var val 00:06:06.408 21:07:00 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:06.408 21:07:00 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:06.408 21:07:00 -- accel/accel.sh@12 -- # build_accel_config 00:06:06.408 21:07:00 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:06.408 21:07:00 -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:06:06.408 21:07:00 -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:06:06.408 21:07:00 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:06:06.408 21:07:00 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:06:06.408 21:07:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:06.409 21:07:00 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:06.409 21:07:00 -- accel/accel.sh@40 -- # local IFS=, 00:06:06.409 21:07:00 -- accel/accel.sh@41 -- # jq -r . 00:06:06.409 [2024-04-23 21:07:00.620821] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 
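Each block above is driven by a run_test wrapper that prints the START/END banner pair and the real/user/sys timings shown between them. A simplified sketch of that shape; the autotest_common.sh original also toggles xtrace and does exit-status bookkeeping that this omits:

run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                        # emits the real/user/sys lines seen in the log
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return "$rc"
}

run_test demo_sleep sleep 1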
00:06:06.409 [2024-04-23 21:07:00.620925] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1240581 ] 00:06:06.666 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.667 [2024-04-23 21:07:00.741720] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.667 [2024-04-23 21:07:00.840779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.667 [2024-04-23 21:07:00.845289] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:06:06.667 [2024-04-23 21:07:00.853260] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:06:13.237 21:07:07 -- accel/accel.sh@20 -- # val= 00:06:13.237 21:07:07 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.237 21:07:07 -- accel/accel.sh@19 -- # IFS=: 00:06:13.237 21:07:07 -- accel/accel.sh@19 -- # read -r var val 00:06:13.237 21:07:07 -- accel/accel.sh@20 -- # val= 00:06:13.237 21:07:07 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.237 21:07:07 -- accel/accel.sh@19 -- # IFS=: 00:06:13.237 21:07:07 -- accel/accel.sh@19 -- # read -r var val 00:06:13.237 21:07:07 -- accel/accel.sh@20 -- # val=0x1 00:06:13.237 21:07:07 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.237 21:07:07 -- accel/accel.sh@19 -- # IFS=: 00:06:13.237 21:07:07 -- accel/accel.sh@19 -- # read -r var val 00:06:13.237 21:07:07 -- accel/accel.sh@20 -- # val= 00:06:13.237 21:07:07 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.237 21:07:07 -- accel/accel.sh@19 -- # IFS=: 00:06:13.237 21:07:07 -- accel/accel.sh@19 -- # read -r var val 00:06:13.237 21:07:07 -- accel/accel.sh@20 -- # val= 00:06:13.237 21:07:07 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.237 21:07:07 -- accel/accel.sh@19 -- # IFS=: 00:06:13.237 21:07:07 -- accel/accel.sh@19 -- # read -r var val 00:06:13.237 21:07:07 -- accel/accel.sh@20 -- # val=crc32c 00:06:13.237 21:07:07 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.237 21:07:07 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:13.237 21:07:07 -- accel/accel.sh@19 -- # IFS=: 00:06:13.237 21:07:07 -- accel/accel.sh@19 -- # read -r var val 00:06:13.237 21:07:07 -- accel/accel.sh@20 -- # val=32 00:06:13.237 21:07:07 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.237 21:07:07 -- accel/accel.sh@19 -- # IFS=: 00:06:13.237 21:07:07 -- accel/accel.sh@19 -- # read -r var val 00:06:13.237 21:07:07 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:13.237 21:07:07 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.237 21:07:07 -- accel/accel.sh@19 -- # IFS=: 00:06:13.237 21:07:07 -- accel/accel.sh@19 -- # read -r var val 00:06:13.237 21:07:07 -- accel/accel.sh@20 -- # val= 00:06:13.237 21:07:07 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.237 21:07:07 -- accel/accel.sh@19 -- # IFS=: 00:06:13.237 21:07:07 -- accel/accel.sh@19 -- # read -r var val 00:06:13.237 21:07:07 -- accel/accel.sh@20 -- # val=dsa 00:06:13.237 21:07:07 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.237 21:07:07 -- accel/accel.sh@22 -- # accel_module=dsa 00:06:13.237 21:07:07 -- accel/accel.sh@19 -- # IFS=: 00:06:13.237 21:07:07 -- accel/accel.sh@19 -- # read -r var val 00:06:13.237 21:07:07 -- accel/accel.sh@20 -- # val=32 00:06:13.237 21:07:07 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.237 21:07:07 -- accel/accel.sh@19 -- # IFS=: 00:06:13.237 21:07:07 -- accel/accel.sh@19 -- # read -r var val 00:06:13.237 21:07:07 -- 
accel/accel.sh@20 -- # val=32 00:06:13.237 21:07:07 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.237 21:07:07 -- accel/accel.sh@19 -- # IFS=: 00:06:13.237 21:07:07 -- accel/accel.sh@19 -- # read -r var val 00:06:13.237 21:07:07 -- accel/accel.sh@20 -- # val=1 00:06:13.237 21:07:07 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.237 21:07:07 -- accel/accel.sh@19 -- # IFS=: 00:06:13.237 21:07:07 -- accel/accel.sh@19 -- # read -r var val 00:06:13.237 21:07:07 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:13.237 21:07:07 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.237 21:07:07 -- accel/accel.sh@19 -- # IFS=: 00:06:13.237 21:07:07 -- accel/accel.sh@19 -- # read -r var val 00:06:13.237 21:07:07 -- accel/accel.sh@20 -- # val=Yes 00:06:13.237 21:07:07 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.237 21:07:07 -- accel/accel.sh@19 -- # IFS=: 00:06:13.237 21:07:07 -- accel/accel.sh@19 -- # read -r var val 00:06:13.237 21:07:07 -- accel/accel.sh@20 -- # val= 00:06:13.237 21:07:07 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.237 21:07:07 -- accel/accel.sh@19 -- # IFS=: 00:06:13.237 21:07:07 -- accel/accel.sh@19 -- # read -r var val 00:06:13.237 21:07:07 -- accel/accel.sh@20 -- # val= 00:06:13.237 21:07:07 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.237 21:07:07 -- accel/accel.sh@19 -- # IFS=: 00:06:13.237 21:07:07 -- accel/accel.sh@19 -- # read -r var val 00:06:16.521 21:07:10 -- accel/accel.sh@20 -- # val= 00:06:16.521 21:07:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.521 21:07:10 -- accel/accel.sh@19 -- # IFS=: 00:06:16.521 21:07:10 -- accel/accel.sh@19 -- # read -r var val 00:06:16.521 21:07:10 -- accel/accel.sh@20 -- # val= 00:06:16.521 21:07:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.521 21:07:10 -- accel/accel.sh@19 -- # IFS=: 00:06:16.521 21:07:10 -- accel/accel.sh@19 -- # read -r var val 00:06:16.521 21:07:10 -- accel/accel.sh@20 -- # val= 00:06:16.521 21:07:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.521 21:07:10 -- accel/accel.sh@19 -- # IFS=: 00:06:16.521 21:07:10 -- accel/accel.sh@19 -- # read -r var val 00:06:16.521 21:07:10 -- accel/accel.sh@20 -- # val= 00:06:16.521 21:07:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.521 21:07:10 -- accel/accel.sh@19 -- # IFS=: 00:06:16.521 21:07:10 -- accel/accel.sh@19 -- # read -r var val 00:06:16.521 21:07:10 -- accel/accel.sh@20 -- # val= 00:06:16.521 21:07:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.521 21:07:10 -- accel/accel.sh@19 -- # IFS=: 00:06:16.521 21:07:10 -- accel/accel.sh@19 -- # read -r var val 00:06:16.521 21:07:10 -- accel/accel.sh@20 -- # val= 00:06:16.521 21:07:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.521 21:07:10 -- accel/accel.sh@19 -- # IFS=: 00:06:16.521 21:07:10 -- accel/accel.sh@19 -- # read -r var val 00:06:16.521 21:07:10 -- accel/accel.sh@27 -- # [[ -n dsa ]] 00:06:16.521 21:07:10 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:16.521 21:07:10 -- accel/accel.sh@27 -- # [[ dsa == \d\s\a ]] 00:06:16.521 00:06:16.521 real 0m9.694s 00:06:16.521 user 0m3.283s 00:06:16.521 sys 0m0.253s 00:06:16.521 21:07:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:16.521 21:07:10 -- common/autotest_common.sh@10 -- # set +x 00:06:16.521 ************************************ 00:06:16.521 END TEST accel_crc32c 00:06:16.521 ************************************ 00:06:16.521 21:07:10 -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:16.521 21:07:10 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 
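The long runs of val= lines above are accel_test parsing accel_perf's "key: value" summary output: IFS=: read -r var val splits each line at the colon, and the case "$var" block records the opcode and the module that serviced it. A condensed sketch under that assumption; the field names here are illustrative, not copied from accel_perf's actual output:

while IFS=: read -r var val; do
    val=${val# }                     # drop the space that follows the colon
    case "$var" in
        "Workload Type") accel_opc=$val ;;
        "Module")        accel_module=$val ;;
    esac
done <<'EOF'
Workload Type: crc32c
Module: dsa
EOF
echo "opc=$accel_opc module=$accel_module"   # opc=crc32c module=dsa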
00:06:16.521 21:07:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:16.521 21:07:10 -- common/autotest_common.sh@10 -- # set +x 00:06:16.521 ************************************ 00:06:16.521 START TEST accel_crc32c_C2 00:06:16.521 ************************************ 00:06:16.521 21:07:10 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:16.521 21:07:10 -- accel/accel.sh@16 -- # local accel_opc 00:06:16.521 21:07:10 -- accel/accel.sh@17 -- # local accel_module 00:06:16.521 21:07:10 -- accel/accel.sh@19 -- # IFS=: 00:06:16.521 21:07:10 -- accel/accel.sh@19 -- # read -r var val 00:06:16.521 21:07:10 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:16.521 21:07:10 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:16.521 21:07:10 -- accel/accel.sh@12 -- # build_accel_config 00:06:16.521 21:07:10 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:16.521 21:07:10 -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:06:16.521 21:07:10 -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:06:16.521 21:07:10 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:06:16.521 21:07:10 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:06:16.521 21:07:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:16.521 21:07:10 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:16.521 21:07:10 -- accel/accel.sh@40 -- # local IFS=, 00:06:16.521 21:07:10 -- accel/accel.sh@41 -- # jq -r . 00:06:16.521 [2024-04-23 21:07:10.466968] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 00:06:16.521 [2024-04-23 21:07:10.467105] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1242647 ] 00:06:16.521 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.521 [2024-04-23 21:07:10.567401] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.521 [2024-04-23 21:07:10.662116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.521 [2024-04-23 21:07:10.666612] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:06:16.521 [2024-04-23 21:07:10.674580] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:06:23.199 21:07:17 -- accel/accel.sh@20 -- # val= 00:06:23.199 21:07:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.199 21:07:17 -- accel/accel.sh@19 -- # IFS=: 00:06:23.199 21:07:17 -- accel/accel.sh@19 -- # read -r var val 00:06:23.199 21:07:17 -- accel/accel.sh@20 -- # val= 00:06:23.199 21:07:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.199 21:07:17 -- accel/accel.sh@19 -- # IFS=: 00:06:23.199 21:07:17 -- accel/accel.sh@19 -- # read -r var val 00:06:23.199 21:07:17 -- accel/accel.sh@20 -- # val=0x1 00:06:23.199 21:07:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.199 21:07:17 -- accel/accel.sh@19 -- # IFS=: 00:06:23.199 21:07:17 -- accel/accel.sh@19 -- # read -r var val 00:06:23.199 21:07:17 -- accel/accel.sh@20 -- # val= 00:06:23.199 21:07:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.199 21:07:17 -- accel/accel.sh@19 -- # IFS=: 00:06:23.199 21:07:17 -- accel/accel.sh@19 -- # read -r var val 00:06:23.199 21:07:17 -- accel/accel.sh@20 -- # val= 00:06:23.199 21:07:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.199 
21:07:17 -- accel/accel.sh@19 -- # IFS=: 00:06:23.199 21:07:17 -- accel/accel.sh@19 -- # read -r var val 00:06:23.199 21:07:17 -- accel/accel.sh@20 -- # val=crc32c 00:06:23.199 21:07:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.199 21:07:17 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:23.199 21:07:17 -- accel/accel.sh@19 -- # IFS=: 00:06:23.199 21:07:17 -- accel/accel.sh@19 -- # read -r var val 00:06:23.199 21:07:17 -- accel/accel.sh@20 -- # val=0 00:06:23.199 21:07:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.199 21:07:17 -- accel/accel.sh@19 -- # IFS=: 00:06:23.199 21:07:17 -- accel/accel.sh@19 -- # read -r var val 00:06:23.199 21:07:17 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:23.199 21:07:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.199 21:07:17 -- accel/accel.sh@19 -- # IFS=: 00:06:23.199 21:07:17 -- accel/accel.sh@19 -- # read -r var val 00:06:23.199 21:07:17 -- accel/accel.sh@20 -- # val= 00:06:23.199 21:07:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.199 21:07:17 -- accel/accel.sh@19 -- # IFS=: 00:06:23.199 21:07:17 -- accel/accel.sh@19 -- # read -r var val 00:06:23.199 21:07:17 -- accel/accel.sh@20 -- # val=dsa 00:06:23.199 21:07:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.199 21:07:17 -- accel/accel.sh@22 -- # accel_module=dsa 00:06:23.199 21:07:17 -- accel/accel.sh@19 -- # IFS=: 00:06:23.199 21:07:17 -- accel/accel.sh@19 -- # read -r var val 00:06:23.199 21:07:17 -- accel/accel.sh@20 -- # val=32 00:06:23.199 21:07:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.199 21:07:17 -- accel/accel.sh@19 -- # IFS=: 00:06:23.199 21:07:17 -- accel/accel.sh@19 -- # read -r var val 00:06:23.199 21:07:17 -- accel/accel.sh@20 -- # val=32 00:06:23.199 21:07:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.199 21:07:17 -- accel/accel.sh@19 -- # IFS=: 00:06:23.199 21:07:17 -- accel/accel.sh@19 -- # read -r var val 00:06:23.199 21:07:17 -- accel/accel.sh@20 -- # val=1 00:06:23.199 21:07:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.199 21:07:17 -- accel/accel.sh@19 -- # IFS=: 00:06:23.199 21:07:17 -- accel/accel.sh@19 -- # read -r var val 00:06:23.199 21:07:17 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:23.199 21:07:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.199 21:07:17 -- accel/accel.sh@19 -- # IFS=: 00:06:23.199 21:07:17 -- accel/accel.sh@19 -- # read -r var val 00:06:23.199 21:07:17 -- accel/accel.sh@20 -- # val=Yes 00:06:23.199 21:07:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.199 21:07:17 -- accel/accel.sh@19 -- # IFS=: 00:06:23.199 21:07:17 -- accel/accel.sh@19 -- # read -r var val 00:06:23.199 21:07:17 -- accel/accel.sh@20 -- # val= 00:06:23.199 21:07:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.199 21:07:17 -- accel/accel.sh@19 -- # IFS=: 00:06:23.199 21:07:17 -- accel/accel.sh@19 -- # read -r var val 00:06:23.199 21:07:17 -- accel/accel.sh@20 -- # val= 00:06:23.199 21:07:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.199 21:07:17 -- accel/accel.sh@19 -- # IFS=: 00:06:23.199 21:07:17 -- accel/accel.sh@19 -- # read -r var val 00:06:26.500 21:07:20 -- accel/accel.sh@20 -- # val= 00:06:26.500 21:07:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.500 21:07:20 -- accel/accel.sh@19 -- # IFS=: 00:06:26.500 21:07:20 -- accel/accel.sh@19 -- # read -r var val 00:06:26.500 21:07:20 -- accel/accel.sh@20 -- # val= 00:06:26.500 21:07:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.500 21:07:20 -- accel/accel.sh@19 -- # IFS=: 00:06:26.500 21:07:20 -- accel/accel.sh@19 -- # read -r var val 
00:06:26.500 21:07:20 -- accel/accel.sh@20 -- # val= 00:06:26.500 21:07:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.500 21:07:20 -- accel/accel.sh@19 -- # IFS=: 00:06:26.500 21:07:20 -- accel/accel.sh@19 -- # read -r var val 00:06:26.500 21:07:20 -- accel/accel.sh@20 -- # val= 00:06:26.500 21:07:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.500 21:07:20 -- accel/accel.sh@19 -- # IFS=: 00:06:26.500 21:07:20 -- accel/accel.sh@19 -- # read -r var val 00:06:26.500 21:07:20 -- accel/accel.sh@20 -- # val= 00:06:26.500 21:07:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.500 21:07:20 -- accel/accel.sh@19 -- # IFS=: 00:06:26.500 21:07:20 -- accel/accel.sh@19 -- # read -r var val 00:06:26.500 21:07:20 -- accel/accel.sh@20 -- # val= 00:06:26.500 21:07:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.500 21:07:20 -- accel/accel.sh@19 -- # IFS=: 00:06:26.500 21:07:20 -- accel/accel.sh@19 -- # read -r var val 00:06:26.500 21:07:20 -- accel/accel.sh@27 -- # [[ -n dsa ]] 00:06:26.500 21:07:20 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:26.500 21:07:20 -- accel/accel.sh@27 -- # [[ dsa == \d\s\a ]] 00:06:26.500 00:06:26.500 real 0m9.686s 00:06:26.500 user 0m3.283s 00:06:26.500 sys 0m0.238s 00:06:26.500 21:07:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:26.500 21:07:20 -- common/autotest_common.sh@10 -- # set +x 00:06:26.500 ************************************ 00:06:26.500 END TEST accel_crc32c_C2 00:06:26.500 ************************************ 00:06:26.500 21:07:20 -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:26.500 21:07:20 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:26.500 21:07:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:26.500 21:07:20 -- common/autotest_common.sh@10 -- # set +x 00:06:26.500 ************************************ 00:06:26.500 START TEST accel_copy 00:06:26.500 ************************************ 00:06:26.501 21:07:20 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy -y 00:06:26.501 21:07:20 -- accel/accel.sh@16 -- # local accel_opc 00:06:26.501 21:07:20 -- accel/accel.sh@17 -- # local accel_module 00:06:26.501 21:07:20 -- accel/accel.sh@19 -- # IFS=: 00:06:26.501 21:07:20 -- accel/accel.sh@19 -- # read -r var val 00:06:26.501 21:07:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:26.501 21:07:20 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:26.501 21:07:20 -- accel/accel.sh@12 -- # build_accel_config 00:06:26.501 21:07:20 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:26.501 21:07:20 -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:06:26.501 21:07:20 -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:06:26.501 21:07:20 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:06:26.501 21:07:20 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:06:26.501 21:07:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.501 21:07:20 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:26.501 21:07:20 -- accel/accel.sh@40 -- # local IFS=, 00:06:26.501 21:07:20 -- accel/accel.sh@41 -- # jq -r . 00:06:26.501 [2024-04-23 21:07:20.290271] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 
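Each test then closes with the three checks traced above: a module was selected, the opcode was recorded, and the module is the expected engine (dsa in these runs, spelled out character by character in the pattern match). A minimal sketch of the same assertions:

accel_module=dsa
accel_opc=crc32c

[[ -n "$accel_module" ]] || exit 1          # some engine claimed the operation
[[ -n "$accel_opc" ]] || exit 1             # the opcode was captured from the output
[[ "$accel_module" == \d\s\a ]] || exit 1   # and it is the DSA engine, as expected

echo "$accel_opc ran on $accel_module"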
00:06:26.501 [2024-04-23 21:07:20.290408] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1244465 ] 00:06:26.501 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.501 [2024-04-23 21:07:20.423290] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.501 [2024-04-23 21:07:20.519826] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.501 [2024-04-23 21:07:20.524461] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:06:26.501 [2024-04-23 21:07:20.532415] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:06:33.099 21:07:26 -- accel/accel.sh@20 -- # val= 00:06:33.099 21:07:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.099 21:07:26 -- accel/accel.sh@19 -- # IFS=: 00:06:33.099 21:07:26 -- accel/accel.sh@19 -- # read -r var val 00:06:33.099 21:07:26 -- accel/accel.sh@20 -- # val= 00:06:33.099 21:07:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.099 21:07:26 -- accel/accel.sh@19 -- # IFS=: 00:06:33.099 21:07:26 -- accel/accel.sh@19 -- # read -r var val 00:06:33.099 21:07:26 -- accel/accel.sh@20 -- # val=0x1 00:06:33.099 21:07:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.099 21:07:26 -- accel/accel.sh@19 -- # IFS=: 00:06:33.099 21:07:26 -- accel/accel.sh@19 -- # read -r var val 00:06:33.099 21:07:26 -- accel/accel.sh@20 -- # val= 00:06:33.099 21:07:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.099 21:07:26 -- accel/accel.sh@19 -- # IFS=: 00:06:33.099 21:07:26 -- accel/accel.sh@19 -- # read -r var val 00:06:33.099 21:07:26 -- accel/accel.sh@20 -- # val= 00:06:33.099 21:07:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.099 21:07:26 -- accel/accel.sh@19 -- # IFS=: 00:06:33.099 21:07:26 -- accel/accel.sh@19 -- # read -r var val 00:06:33.099 21:07:26 -- accel/accel.sh@20 -- # val=copy 00:06:33.099 21:07:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.099 21:07:26 -- accel/accel.sh@23 -- # accel_opc=copy 00:06:33.099 21:07:26 -- accel/accel.sh@19 -- # IFS=: 00:06:33.099 21:07:26 -- accel/accel.sh@19 -- # read -r var val 00:06:33.099 21:07:26 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:33.099 21:07:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.099 21:07:26 -- accel/accel.sh@19 -- # IFS=: 00:06:33.099 21:07:26 -- accel/accel.sh@19 -- # read -r var val 00:06:33.099 21:07:26 -- accel/accel.sh@20 -- # val= 00:06:33.099 21:07:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.099 21:07:26 -- accel/accel.sh@19 -- # IFS=: 00:06:33.099 21:07:26 -- accel/accel.sh@19 -- # read -r var val 00:06:33.099 21:07:26 -- accel/accel.sh@20 -- # val=dsa 00:06:33.099 21:07:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.099 21:07:26 -- accel/accel.sh@22 -- # accel_module=dsa 00:06:33.099 21:07:26 -- accel/accel.sh@19 -- # IFS=: 00:06:33.099 21:07:26 -- accel/accel.sh@19 -- # read -r var val 00:06:33.100 21:07:26 -- accel/accel.sh@20 -- # val=32 00:06:33.100 21:07:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.100 21:07:26 -- accel/accel.sh@19 -- # IFS=: 00:06:33.100 21:07:26 -- accel/accel.sh@19 -- # read -r var val 00:06:33.100 21:07:26 -- accel/accel.sh@20 -- # val=32 00:06:33.100 21:07:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.100 21:07:26 -- accel/accel.sh@19 -- # IFS=: 00:06:33.100 21:07:26 -- accel/accel.sh@19 -- # read -r var val 00:06:33.100 21:07:26 -- 
accel/accel.sh@20 -- # val=1 00:06:33.100 21:07:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.100 21:07:26 -- accel/accel.sh@19 -- # IFS=: 00:06:33.100 21:07:26 -- accel/accel.sh@19 -- # read -r var val 00:06:33.100 21:07:26 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:33.100 21:07:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.100 21:07:26 -- accel/accel.sh@19 -- # IFS=: 00:06:33.100 21:07:26 -- accel/accel.sh@19 -- # read -r var val 00:06:33.100 21:07:26 -- accel/accel.sh@20 -- # val=Yes 00:06:33.100 21:07:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.100 21:07:26 -- accel/accel.sh@19 -- # IFS=: 00:06:33.100 21:07:26 -- accel/accel.sh@19 -- # read -r var val 00:06:33.100 21:07:26 -- accel/accel.sh@20 -- # val= 00:06:33.100 21:07:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.100 21:07:26 -- accel/accel.sh@19 -- # IFS=: 00:06:33.100 21:07:26 -- accel/accel.sh@19 -- # read -r var val 00:06:33.100 21:07:26 -- accel/accel.sh@20 -- # val= 00:06:33.100 21:07:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.100 21:07:26 -- accel/accel.sh@19 -- # IFS=: 00:06:33.100 21:07:26 -- accel/accel.sh@19 -- # read -r var val 00:06:36.408 21:07:29 -- accel/accel.sh@20 -- # val= 00:06:36.408 21:07:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.408 21:07:29 -- accel/accel.sh@19 -- # IFS=: 00:06:36.408 21:07:29 -- accel/accel.sh@19 -- # read -r var val 00:06:36.408 21:07:29 -- accel/accel.sh@20 -- # val= 00:06:36.408 21:07:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.408 21:07:29 -- accel/accel.sh@19 -- # IFS=: 00:06:36.408 21:07:29 -- accel/accel.sh@19 -- # read -r var val 00:06:36.408 21:07:29 -- accel/accel.sh@20 -- # val= 00:06:36.408 21:07:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.408 21:07:29 -- accel/accel.sh@19 -- # IFS=: 00:06:36.408 21:07:29 -- accel/accel.sh@19 -- # read -r var val 00:06:36.408 21:07:29 -- accel/accel.sh@20 -- # val= 00:06:36.408 21:07:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.408 21:07:29 -- accel/accel.sh@19 -- # IFS=: 00:06:36.408 21:07:29 -- accel/accel.sh@19 -- # read -r var val 00:06:36.408 21:07:29 -- accel/accel.sh@20 -- # val= 00:06:36.408 21:07:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.408 21:07:29 -- accel/accel.sh@19 -- # IFS=: 00:06:36.408 21:07:29 -- accel/accel.sh@19 -- # read -r var val 00:06:36.408 21:07:29 -- accel/accel.sh@20 -- # val= 00:06:36.408 21:07:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.408 21:07:29 -- accel/accel.sh@19 -- # IFS=: 00:06:36.408 21:07:29 -- accel/accel.sh@19 -- # read -r var val 00:06:36.408 21:07:29 -- accel/accel.sh@27 -- # [[ -n dsa ]] 00:06:36.408 21:07:29 -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:36.408 21:07:29 -- accel/accel.sh@27 -- # [[ dsa == \d\s\a ]] 00:06:36.408 00:06:36.408 real 0m9.703s 00:06:36.408 user 0m3.250s 00:06:36.408 sys 0m0.280s 00:06:36.408 21:07:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:36.408 21:07:29 -- common/autotest_common.sh@10 -- # set +x 00:06:36.408 ************************************ 00:06:36.408 END TEST accel_copy 00:06:36.408 ************************************ 00:06:36.408 21:07:29 -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:36.408 21:07:29 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:36.408 21:07:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:36.408 21:07:29 -- common/autotest_common.sh@10 -- # set +x 00:06:36.408 ************************************ 00:06:36.408 START TEST accel_fill 
00:06:36.408 ************************************ 00:06:36.408 21:07:30 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:36.408 21:07:30 -- accel/accel.sh@16 -- # local accel_opc 00:06:36.408 21:07:30 -- accel/accel.sh@17 -- # local accel_module 00:06:36.408 21:07:30 -- accel/accel.sh@19 -- # IFS=: 00:06:36.408 21:07:30 -- accel/accel.sh@19 -- # read -r var val 00:06:36.408 21:07:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:36.408 21:07:30 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:36.408 21:07:30 -- accel/accel.sh@12 -- # build_accel_config 00:06:36.408 21:07:30 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:36.408 21:07:30 -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:06:36.408 21:07:30 -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:06:36.408 21:07:30 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:06:36.408 21:07:30 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:06:36.408 21:07:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.408 21:07:30 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:36.408 21:07:30 -- accel/accel.sh@40 -- # local IFS=, 00:06:36.408 21:07:30 -- accel/accel.sh@41 -- # jq -r . 00:06:36.408 [2024-04-23 21:07:30.106304] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 00:06:36.408 [2024-04-23 21:07:30.106407] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1246529 ] 00:06:36.408 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.408 [2024-04-23 21:07:30.225241] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.408 [2024-04-23 21:07:30.322117] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.408 [2024-04-23 21:07:30.326613] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:06:36.408 [2024-04-23 21:07:30.334580] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:06:42.997 21:07:36 -- accel/accel.sh@20 -- # val= 00:06:42.997 21:07:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.997 21:07:36 -- accel/accel.sh@19 -- # IFS=: 00:06:42.997 21:07:36 -- accel/accel.sh@19 -- # read -r var val 00:06:42.997 21:07:36 -- accel/accel.sh@20 -- # val= 00:06:42.997 21:07:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.997 21:07:36 -- accel/accel.sh@19 -- # IFS=: 00:06:42.997 21:07:36 -- accel/accel.sh@19 -- # read -r var val 00:06:42.997 21:07:36 -- accel/accel.sh@20 -- # val=0x1 00:06:42.997 21:07:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.997 21:07:36 -- accel/accel.sh@19 -- # IFS=: 00:06:42.997 21:07:36 -- accel/accel.sh@19 -- # read -r var val 00:06:42.997 21:07:36 -- accel/accel.sh@20 -- # val= 00:06:42.997 21:07:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.997 21:07:36 -- accel/accel.sh@19 -- # IFS=: 00:06:42.997 21:07:36 -- accel/accel.sh@19 -- # read -r var val 00:06:42.997 21:07:36 -- accel/accel.sh@20 -- # val= 00:06:42.997 21:07:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.997 21:07:36 -- accel/accel.sh@19 -- # IFS=: 00:06:42.997 21:07:36 -- accel/accel.sh@19 -- # read -r var val 00:06:42.997 21:07:36 -- accel/accel.sh@20 -- # val=fill 00:06:42.997 21:07:36 -- 
accel/accel.sh@21 -- # case "$var" in 00:06:42.997 21:07:36 -- accel/accel.sh@23 -- # accel_opc=fill 00:06:42.997 21:07:36 -- accel/accel.sh@19 -- # IFS=: 00:06:42.997 21:07:36 -- accel/accel.sh@19 -- # read -r var val 00:06:42.997 21:07:36 -- accel/accel.sh@20 -- # val=0x80 00:06:42.997 21:07:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.997 21:07:36 -- accel/accel.sh@19 -- # IFS=: 00:06:42.997 21:07:36 -- accel/accel.sh@19 -- # read -r var val 00:06:42.997 21:07:36 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:42.997 21:07:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.997 21:07:36 -- accel/accel.sh@19 -- # IFS=: 00:06:42.997 21:07:36 -- accel/accel.sh@19 -- # read -r var val 00:06:42.997 21:07:36 -- accel/accel.sh@20 -- # val= 00:06:42.997 21:07:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.997 21:07:36 -- accel/accel.sh@19 -- # IFS=: 00:06:42.997 21:07:36 -- accel/accel.sh@19 -- # read -r var val 00:06:42.997 21:07:36 -- accel/accel.sh@20 -- # val=dsa 00:06:42.997 21:07:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.997 21:07:36 -- accel/accel.sh@22 -- # accel_module=dsa 00:06:42.997 21:07:36 -- accel/accel.sh@19 -- # IFS=: 00:06:42.997 21:07:36 -- accel/accel.sh@19 -- # read -r var val 00:06:42.997 21:07:36 -- accel/accel.sh@20 -- # val=64 00:06:42.997 21:07:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.997 21:07:36 -- accel/accel.sh@19 -- # IFS=: 00:06:42.997 21:07:36 -- accel/accel.sh@19 -- # read -r var val 00:06:42.997 21:07:36 -- accel/accel.sh@20 -- # val=64 00:06:42.997 21:07:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.997 21:07:36 -- accel/accel.sh@19 -- # IFS=: 00:06:42.997 21:07:36 -- accel/accel.sh@19 -- # read -r var val 00:06:42.997 21:07:36 -- accel/accel.sh@20 -- # val=1 00:06:42.997 21:07:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.997 21:07:36 -- accel/accel.sh@19 -- # IFS=: 00:06:42.997 21:07:36 -- accel/accel.sh@19 -- # read -r var val 00:06:42.997 21:07:36 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:42.997 21:07:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.997 21:07:36 -- accel/accel.sh@19 -- # IFS=: 00:06:42.997 21:07:36 -- accel/accel.sh@19 -- # read -r var val 00:06:42.997 21:07:36 -- accel/accel.sh@20 -- # val=Yes 00:06:42.997 21:07:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.997 21:07:36 -- accel/accel.sh@19 -- # IFS=: 00:06:42.997 21:07:36 -- accel/accel.sh@19 -- # read -r var val 00:06:42.997 21:07:36 -- accel/accel.sh@20 -- # val= 00:06:42.997 21:07:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.997 21:07:36 -- accel/accel.sh@19 -- # IFS=: 00:06:42.997 21:07:36 -- accel/accel.sh@19 -- # read -r var val 00:06:42.997 21:07:36 -- accel/accel.sh@20 -- # val= 00:06:42.997 21:07:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.997 21:07:36 -- accel/accel.sh@19 -- # IFS=: 00:06:42.997 21:07:36 -- accel/accel.sh@19 -- # read -r var val 00:06:45.546 21:07:39 -- accel/accel.sh@20 -- # val= 00:06:45.546 21:07:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.546 21:07:39 -- accel/accel.sh@19 -- # IFS=: 00:06:45.546 21:07:39 -- accel/accel.sh@19 -- # read -r var val 00:06:45.546 21:07:39 -- accel/accel.sh@20 -- # val= 00:06:45.546 21:07:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.546 21:07:39 -- accel/accel.sh@19 -- # IFS=: 00:06:45.546 21:07:39 -- accel/accel.sh@19 -- # read -r var val 00:06:45.546 21:07:39 -- accel/accel.sh@20 -- # val= 00:06:45.546 21:07:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.546 21:07:39 -- accel/accel.sh@19 -- # IFS=: 00:06:45.546 
21:07:39 -- accel/accel.sh@19 -- # read -r var val 00:06:45.546 21:07:39 -- accel/accel.sh@20 -- # val= 00:06:45.546 21:07:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.546 21:07:39 -- accel/accel.sh@19 -- # IFS=: 00:06:45.546 21:07:39 -- accel/accel.sh@19 -- # read -r var val 00:06:45.547 21:07:39 -- accel/accel.sh@20 -- # val= 00:06:45.547 21:07:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.547 21:07:39 -- accel/accel.sh@19 -- # IFS=: 00:06:45.547 21:07:39 -- accel/accel.sh@19 -- # read -r var val 00:06:45.547 21:07:39 -- accel/accel.sh@20 -- # val= 00:06:45.547 21:07:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.547 21:07:39 -- accel/accel.sh@19 -- # IFS=: 00:06:45.547 21:07:39 -- accel/accel.sh@19 -- # read -r var val 00:06:45.547 21:07:39 -- accel/accel.sh@27 -- # [[ -n dsa ]] 00:06:45.547 21:07:39 -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:45.547 21:07:39 -- accel/accel.sh@27 -- # [[ dsa == \d\s\a ]] 00:06:45.547 00:06:45.547 real 0m9.673s 00:06:45.547 user 0m3.274s 00:06:45.547 sys 0m0.239s 00:06:45.547 21:07:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:45.547 21:07:39 -- common/autotest_common.sh@10 -- # set +x 00:06:45.547 ************************************ 00:06:45.547 END TEST accel_fill 00:06:45.547 ************************************ 00:06:45.547 21:07:39 -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:45.547 21:07:39 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:45.547 21:07:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:45.547 21:07:39 -- common/autotest_common.sh@10 -- # set +x 00:06:45.808 ************************************ 00:06:45.808 START TEST accel_copy_crc32c 00:06:45.808 ************************************ 00:06:45.808 21:07:39 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y 00:06:45.808 21:07:39 -- accel/accel.sh@16 -- # local accel_opc 00:06:45.808 21:07:39 -- accel/accel.sh@17 -- # local accel_module 00:06:45.808 21:07:39 -- accel/accel.sh@19 -- # IFS=: 00:06:45.808 21:07:39 -- accel/accel.sh@19 -- # read -r var val 00:06:45.808 21:07:39 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:45.808 21:07:39 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:45.808 21:07:39 -- accel/accel.sh@12 -- # build_accel_config 00:06:45.808 21:07:39 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:45.808 21:07:39 -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:06:45.808 21:07:39 -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:06:45.808 21:07:39 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:06:45.808 21:07:39 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:06:45.808 21:07:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.808 21:07:39 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:45.808 21:07:39 -- accel/accel.sh@40 -- # local IFS=, 00:06:45.808 21:07:39 -- accel/accel.sh@41 -- # jq -r . 00:06:45.808 [2024-04-23 21:07:39.885584] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 
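Every invocation above carries -c /dev/fd/62: build_accel_config collects the module-scan RPCs into the accel_json_cfg array, and the harness hands them to accel_perf as a JSON config on file descriptor 62. A trimmed sketch of that plumbing; the surrounding "subsystems"/"accel" wrapper is an assumption about the config shape, not copied from accel.sh:

# the two scan RPCs captured in the build_accel_config trace
accel_json_cfg=('{"method": "dsa_scan_accel_module"}' '{"method": "iaa_scan_accel_module"}')

# join the entries with commas (the traced 'local IFS=,') and wrap them;
# the outer JSON shape below is assumed for illustration
IFS=,
config='{"subsystems":[{"subsystem":"accel","config":['"${accel_json_cfg[*]}"']}]}'
jq -r . <<< "$config"        # validate/pretty-print, as the traced 'jq -r .' does

# accel_perf would then read it back on fd 62, e.g.:
#   accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 62<<< "$config"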
00:06:45.808 [2024-04-23 21:07:39.885786] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1248411 ] 00:06:45.808 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.808 [2024-04-23 21:07:40.007147] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.068 [2024-04-23 21:07:40.124295] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.068 [2024-04-23 21:07:40.128827] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:06:46.068 [2024-04-23 21:07:40.136789] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:06:52.660 21:07:46 -- accel/accel.sh@20 -- # val= 00:06:52.660 21:07:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.660 21:07:46 -- accel/accel.sh@19 -- # IFS=: 00:06:52.660 21:07:46 -- accel/accel.sh@19 -- # read -r var val 00:06:52.660 21:07:46 -- accel/accel.sh@20 -- # val= 00:06:52.660 21:07:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.660 21:07:46 -- accel/accel.sh@19 -- # IFS=: 00:06:52.660 21:07:46 -- accel/accel.sh@19 -- # read -r var val 00:06:52.660 21:07:46 -- accel/accel.sh@20 -- # val=0x1 00:06:52.660 21:07:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.660 21:07:46 -- accel/accel.sh@19 -- # IFS=: 00:06:52.660 21:07:46 -- accel/accel.sh@19 -- # read -r var val 00:06:52.660 21:07:46 -- accel/accel.sh@20 -- # val= 00:06:52.660 21:07:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.660 21:07:46 -- accel/accel.sh@19 -- # IFS=: 00:06:52.660 21:07:46 -- accel/accel.sh@19 -- # read -r var val 00:06:52.660 21:07:46 -- accel/accel.sh@20 -- # val= 00:06:52.660 21:07:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.660 21:07:46 -- accel/accel.sh@19 -- # IFS=: 00:06:52.660 21:07:46 -- accel/accel.sh@19 -- # read -r var val 00:06:52.661 21:07:46 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:52.661 21:07:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.661 21:07:46 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:52.661 21:07:46 -- accel/accel.sh@19 -- # IFS=: 00:06:52.661 21:07:46 -- accel/accel.sh@19 -- # read -r var val 00:06:52.661 21:07:46 -- accel/accel.sh@20 -- # val=0 00:06:52.661 21:07:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.661 21:07:46 -- accel/accel.sh@19 -- # IFS=: 00:06:52.661 21:07:46 -- accel/accel.sh@19 -- # read -r var val 00:06:52.661 21:07:46 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:52.661 21:07:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.661 21:07:46 -- accel/accel.sh@19 -- # IFS=: 00:06:52.661 21:07:46 -- accel/accel.sh@19 -- # read -r var val 00:06:52.661 21:07:46 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:52.661 21:07:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.661 21:07:46 -- accel/accel.sh@19 -- # IFS=: 00:06:52.661 21:07:46 -- accel/accel.sh@19 -- # read -r var val 00:06:52.661 21:07:46 -- accel/accel.sh@20 -- # val= 00:06:52.661 21:07:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.661 21:07:46 -- accel/accel.sh@19 -- # IFS=: 00:06:52.661 21:07:46 -- accel/accel.sh@19 -- # read -r var val 00:06:52.661 21:07:46 -- accel/accel.sh@20 -- # val=dsa 00:06:52.661 21:07:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.661 21:07:46 -- accel/accel.sh@22 -- # accel_module=dsa 00:06:52.661 21:07:46 -- accel/accel.sh@19 -- # IFS=: 00:06:52.661 21:07:46 -- accel/accel.sh@19 -- # read -r var val 
00:06:52.661 21:07:46 -- accel/accel.sh@20 -- # val=32 00:06:52.661 21:07:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.661 21:07:46 -- accel/accel.sh@19 -- # IFS=: 00:06:52.661 21:07:46 -- accel/accel.sh@19 -- # read -r var val 00:06:52.661 21:07:46 -- accel/accel.sh@20 -- # val=32 00:06:52.661 21:07:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.661 21:07:46 -- accel/accel.sh@19 -- # IFS=: 00:06:52.661 21:07:46 -- accel/accel.sh@19 -- # read -r var val 00:06:52.661 21:07:46 -- accel/accel.sh@20 -- # val=1 00:06:52.661 21:07:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.661 21:07:46 -- accel/accel.sh@19 -- # IFS=: 00:06:52.661 21:07:46 -- accel/accel.sh@19 -- # read -r var val 00:06:52.661 21:07:46 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:52.661 21:07:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.661 21:07:46 -- accel/accel.sh@19 -- # IFS=: 00:06:52.661 21:07:46 -- accel/accel.sh@19 -- # read -r var val 00:06:52.661 21:07:46 -- accel/accel.sh@20 -- # val=Yes 00:06:52.661 21:07:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.661 21:07:46 -- accel/accel.sh@19 -- # IFS=: 00:06:52.661 21:07:46 -- accel/accel.sh@19 -- # read -r var val 00:06:52.661 21:07:46 -- accel/accel.sh@20 -- # val= 00:06:52.661 21:07:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.661 21:07:46 -- accel/accel.sh@19 -- # IFS=: 00:06:52.661 21:07:46 -- accel/accel.sh@19 -- # read -r var val 00:06:52.661 21:07:46 -- accel/accel.sh@20 -- # val= 00:06:52.661 21:07:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.661 21:07:46 -- accel/accel.sh@19 -- # IFS=: 00:06:52.661 21:07:46 -- accel/accel.sh@19 -- # read -r var val 00:06:55.965 21:07:49 -- accel/accel.sh@20 -- # val= 00:06:55.965 21:07:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.965 21:07:49 -- accel/accel.sh@19 -- # IFS=: 00:06:55.965 21:07:49 -- accel/accel.sh@19 -- # read -r var val 00:06:55.965 21:07:49 -- accel/accel.sh@20 -- # val= 00:06:55.965 21:07:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.965 21:07:49 -- accel/accel.sh@19 -- # IFS=: 00:06:55.965 21:07:49 -- accel/accel.sh@19 -- # read -r var val 00:06:55.965 21:07:49 -- accel/accel.sh@20 -- # val= 00:06:55.965 21:07:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.965 21:07:49 -- accel/accel.sh@19 -- # IFS=: 00:06:55.965 21:07:49 -- accel/accel.sh@19 -- # read -r var val 00:06:55.965 21:07:49 -- accel/accel.sh@20 -- # val= 00:06:55.965 21:07:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.965 21:07:49 -- accel/accel.sh@19 -- # IFS=: 00:06:55.965 21:07:49 -- accel/accel.sh@19 -- # read -r var val 00:06:55.965 21:07:49 -- accel/accel.sh@20 -- # val= 00:06:55.965 21:07:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.965 21:07:49 -- accel/accel.sh@19 -- # IFS=: 00:06:55.965 21:07:49 -- accel/accel.sh@19 -- # read -r var val 00:06:55.965 21:07:49 -- accel/accel.sh@20 -- # val= 00:06:55.965 21:07:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.965 21:07:49 -- accel/accel.sh@19 -- # IFS=: 00:06:55.965 21:07:49 -- accel/accel.sh@19 -- # read -r var val 00:06:55.965 21:07:49 -- accel/accel.sh@27 -- # [[ -n dsa ]] 00:06:55.965 21:07:49 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:55.965 21:07:49 -- accel/accel.sh@27 -- # [[ dsa == \d\s\a ]] 00:06:55.965 00:06:55.965 real 0m9.682s 00:06:55.965 user 0m3.271s 00:06:55.965 sys 0m0.241s 00:06:55.965 21:07:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:55.965 21:07:49 -- common/autotest_common.sh@10 -- # set +x 00:06:55.965 ************************************ 
00:06:55.965 END TEST accel_copy_crc32c 00:06:55.965 ************************************ 00:06:55.965 21:07:49 -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:55.965 21:07:49 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:55.965 21:07:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:55.965 21:07:49 -- common/autotest_common.sh@10 -- # set +x 00:06:55.965 ************************************ 00:06:55.965 START TEST accel_copy_crc32c_C2 00:06:55.965 ************************************ 00:06:55.966 21:07:49 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:55.966 21:07:49 -- accel/accel.sh@16 -- # local accel_opc 00:06:55.966 21:07:49 -- accel/accel.sh@17 -- # local accel_module 00:06:55.966 21:07:49 -- accel/accel.sh@19 -- # IFS=: 00:06:55.966 21:07:49 -- accel/accel.sh@19 -- # read -r var val 00:06:55.966 21:07:49 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:55.966 21:07:49 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:55.966 21:07:49 -- accel/accel.sh@12 -- # build_accel_config 00:06:55.966 21:07:49 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:55.966 21:07:49 -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:06:55.966 21:07:49 -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:06:55.966 21:07:49 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:06:55.966 21:07:49 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:06:55.966 21:07:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.966 21:07:49 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:55.966 21:07:49 -- accel/accel.sh@40 -- # local IFS=, 00:06:55.966 21:07:49 -- accel/accel.sh@41 -- # jq -r . 00:06:55.966 [2024-04-23 21:07:49.669972] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 
00:06:55.966 [2024-04-23 21:07:49.670074] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1250422 ] 00:06:55.966 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.966 [2024-04-23 21:07:49.785892] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.966 [2024-04-23 21:07:49.875940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.966 [2024-04-23 21:07:49.880423] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:06:55.966 [2024-04-23 21:07:49.888392] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:07:02.623 21:07:56 -- accel/accel.sh@20 -- # val= 00:07:02.623 21:07:56 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.623 21:07:56 -- accel/accel.sh@19 -- # IFS=: 00:07:02.623 21:07:56 -- accel/accel.sh@19 -- # read -r var val 00:07:02.623 21:07:56 -- accel/accel.sh@20 -- # val= 00:07:02.623 21:07:56 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.623 21:07:56 -- accel/accel.sh@19 -- # IFS=: 00:07:02.623 21:07:56 -- accel/accel.sh@19 -- # read -r var val 00:07:02.623 21:07:56 -- accel/accel.sh@20 -- # val=0x1 00:07:02.623 21:07:56 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.623 21:07:56 -- accel/accel.sh@19 -- # IFS=: 00:07:02.623 21:07:56 -- accel/accel.sh@19 -- # read -r var val 00:07:02.623 21:07:56 -- accel/accel.sh@20 -- # val= 00:07:02.623 21:07:56 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.623 21:07:56 -- accel/accel.sh@19 -- # IFS=: 00:07:02.623 21:07:56 -- accel/accel.sh@19 -- # read -r var val 00:07:02.623 21:07:56 -- accel/accel.sh@20 -- # val= 00:07:02.623 21:07:56 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.623 21:07:56 -- accel/accel.sh@19 -- # IFS=: 00:07:02.623 21:07:56 -- accel/accel.sh@19 -- # read -r var val 00:07:02.623 21:07:56 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:02.623 21:07:56 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.623 21:07:56 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:02.623 21:07:56 -- accel/accel.sh@19 -- # IFS=: 00:07:02.623 21:07:56 -- accel/accel.sh@19 -- # read -r var val 00:07:02.623 21:07:56 -- accel/accel.sh@20 -- # val=0 00:07:02.623 21:07:56 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.623 21:07:56 -- accel/accel.sh@19 -- # IFS=: 00:07:02.623 21:07:56 -- accel/accel.sh@19 -- # read -r var val 00:07:02.623 21:07:56 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:02.623 21:07:56 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.623 21:07:56 -- accel/accel.sh@19 -- # IFS=: 00:07:02.623 21:07:56 -- accel/accel.sh@19 -- # read -r var val 00:07:02.623 21:07:56 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:02.623 21:07:56 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.623 21:07:56 -- accel/accel.sh@19 -- # IFS=: 00:07:02.623 21:07:56 -- accel/accel.sh@19 -- # read -r var val 00:07:02.623 21:07:56 -- accel/accel.sh@20 -- # val= 00:07:02.623 21:07:56 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.623 21:07:56 -- accel/accel.sh@19 -- # IFS=: 00:07:02.623 21:07:56 -- accel/accel.sh@19 -- # read -r var val 00:07:02.623 21:07:56 -- accel/accel.sh@20 -- # val=dsa 00:07:02.623 21:07:56 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.623 21:07:56 -- accel/accel.sh@22 -- # accel_module=dsa 00:07:02.623 21:07:56 -- accel/accel.sh@19 -- # IFS=: 00:07:02.623 21:07:56 -- accel/accel.sh@19 -- # read -r var val 
00:07:02.623 21:07:56 -- accel/accel.sh@20 -- # val=32 00:07:02.623 21:07:56 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.623 21:07:56 -- accel/accel.sh@19 -- # IFS=: 00:07:02.623 21:07:56 -- accel/accel.sh@19 -- # read -r var val 00:07:02.623 21:07:56 -- accel/accel.sh@20 -- # val=32 00:07:02.623 21:07:56 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.623 21:07:56 -- accel/accel.sh@19 -- # IFS=: 00:07:02.623 21:07:56 -- accel/accel.sh@19 -- # read -r var val 00:07:02.623 21:07:56 -- accel/accel.sh@20 -- # val=1 00:07:02.623 21:07:56 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.623 21:07:56 -- accel/accel.sh@19 -- # IFS=: 00:07:02.623 21:07:56 -- accel/accel.sh@19 -- # read -r var val 00:07:02.623 21:07:56 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:02.623 21:07:56 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.623 21:07:56 -- accel/accel.sh@19 -- # IFS=: 00:07:02.623 21:07:56 -- accel/accel.sh@19 -- # read -r var val 00:07:02.623 21:07:56 -- accel/accel.sh@20 -- # val=Yes 00:07:02.623 21:07:56 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.623 21:07:56 -- accel/accel.sh@19 -- # IFS=: 00:07:02.623 21:07:56 -- accel/accel.sh@19 -- # read -r var val 00:07:02.623 21:07:56 -- accel/accel.sh@20 -- # val= 00:07:02.623 21:07:56 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.623 21:07:56 -- accel/accel.sh@19 -- # IFS=: 00:07:02.623 21:07:56 -- accel/accel.sh@19 -- # read -r var val 00:07:02.623 21:07:56 -- accel/accel.sh@20 -- # val= 00:07:02.623 21:07:56 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.624 21:07:56 -- accel/accel.sh@19 -- # IFS=: 00:07:02.624 21:07:56 -- accel/accel.sh@19 -- # read -r var val 00:07:05.172 21:07:59 -- accel/accel.sh@20 -- # val= 00:07:05.172 21:07:59 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.172 21:07:59 -- accel/accel.sh@19 -- # IFS=: 00:07:05.172 21:07:59 -- accel/accel.sh@19 -- # read -r var val 00:07:05.172 21:07:59 -- accel/accel.sh@20 -- # val= 00:07:05.172 21:07:59 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.172 21:07:59 -- accel/accel.sh@19 -- # IFS=: 00:07:05.172 21:07:59 -- accel/accel.sh@19 -- # read -r var val 00:07:05.172 21:07:59 -- accel/accel.sh@20 -- # val= 00:07:05.172 21:07:59 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.172 21:07:59 -- accel/accel.sh@19 -- # IFS=: 00:07:05.172 21:07:59 -- accel/accel.sh@19 -- # read -r var val 00:07:05.172 21:07:59 -- accel/accel.sh@20 -- # val= 00:07:05.172 21:07:59 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.172 21:07:59 -- accel/accel.sh@19 -- # IFS=: 00:07:05.172 21:07:59 -- accel/accel.sh@19 -- # read -r var val 00:07:05.172 21:07:59 -- accel/accel.sh@20 -- # val= 00:07:05.172 21:07:59 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.172 21:07:59 -- accel/accel.sh@19 -- # IFS=: 00:07:05.172 21:07:59 -- accel/accel.sh@19 -- # read -r var val 00:07:05.172 21:07:59 -- accel/accel.sh@20 -- # val= 00:07:05.172 21:07:59 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.172 21:07:59 -- accel/accel.sh@19 -- # IFS=: 00:07:05.172 21:07:59 -- accel/accel.sh@19 -- # read -r var val 00:07:05.172 21:07:59 -- accel/accel.sh@27 -- # [[ -n dsa ]] 00:07:05.172 21:07:59 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:05.172 21:07:59 -- accel/accel.sh@27 -- # [[ dsa == \d\s\a ]] 00:07:05.172 00:07:05.172 real 0m9.678s 00:07:05.172 user 0m3.255s 00:07:05.172 sys 0m0.250s 00:07:05.172 21:07:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:05.172 21:07:59 -- common/autotest_common.sh@10 -- # set +x 00:07:05.172 ************************************ 
00:07:05.172 END TEST accel_copy_crc32c_C2 00:07:05.172 ************************************ 00:07:05.172 21:07:59 -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:05.172 21:07:59 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:05.172 21:07:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:05.172 21:07:59 -- common/autotest_common.sh@10 -- # set +x 00:07:05.172 ************************************ 00:07:05.172 START TEST accel_dualcast 00:07:05.172 ************************************ 00:07:05.172 21:07:59 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dualcast -y 00:07:05.172 21:07:59 -- accel/accel.sh@16 -- # local accel_opc 00:07:05.172 21:07:59 -- accel/accel.sh@17 -- # local accel_module 00:07:05.172 21:07:59 -- accel/accel.sh@19 -- # IFS=: 00:07:05.172 21:07:59 -- accel/accel.sh@19 -- # read -r var val 00:07:05.172 21:07:59 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:05.173 21:07:59 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:05.173 21:07:59 -- accel/accel.sh@12 -- # build_accel_config 00:07:05.173 21:07:59 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:05.173 21:07:59 -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:07:05.173 21:07:59 -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:07:05.173 21:07:59 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:07:05.173 21:07:59 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:07:05.173 21:07:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.173 21:07:59 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:05.173 21:07:59 -- accel/accel.sh@40 -- # local IFS=, 00:07:05.173 21:07:59 -- accel/accel.sh@41 -- # jq -r . 00:07:05.173 [2024-04-23 21:07:59.434817] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 
00:07:05.173 [2024-04-23 21:07:59.434883] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1252324 ] 00:07:05.435 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.435 [2024-04-23 21:07:59.523833] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.435 [2024-04-23 21:07:59.619386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.435 [2024-04-23 21:07:59.623982] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:07:05.435 [2024-04-23 21:07:59.631949] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:07:12.055 21:08:06 -- accel/accel.sh@20 -- # val= 00:07:12.055 21:08:06 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.055 21:08:06 -- accel/accel.sh@19 -- # IFS=: 00:07:12.055 21:08:06 -- accel/accel.sh@19 -- # read -r var val 00:07:12.055 21:08:06 -- accel/accel.sh@20 -- # val= 00:07:12.055 21:08:06 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.055 21:08:06 -- accel/accel.sh@19 -- # IFS=: 00:07:12.055 21:08:06 -- accel/accel.sh@19 -- # read -r var val 00:07:12.055 21:08:06 -- accel/accel.sh@20 -- # val=0x1 00:07:12.055 21:08:06 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.055 21:08:06 -- accel/accel.sh@19 -- # IFS=: 00:07:12.055 21:08:06 -- accel/accel.sh@19 -- # read -r var val 00:07:12.055 21:08:06 -- accel/accel.sh@20 -- # val= 00:07:12.055 21:08:06 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.055 21:08:06 -- accel/accel.sh@19 -- # IFS=: 00:07:12.055 21:08:06 -- accel/accel.sh@19 -- # read -r var val 00:07:12.055 21:08:06 -- accel/accel.sh@20 -- # val= 00:07:12.055 21:08:06 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.055 21:08:06 -- accel/accel.sh@19 -- # IFS=: 00:07:12.055 21:08:06 -- accel/accel.sh@19 -- # read -r var val 00:07:12.055 21:08:06 -- accel/accel.sh@20 -- # val=dualcast 00:07:12.055 21:08:06 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.055 21:08:06 -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:12.055 21:08:06 -- accel/accel.sh@19 -- # IFS=: 00:07:12.055 21:08:06 -- accel/accel.sh@19 -- # read -r var val 00:07:12.055 21:08:06 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:12.055 21:08:06 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.055 21:08:06 -- accel/accel.sh@19 -- # IFS=: 00:07:12.055 21:08:06 -- accel/accel.sh@19 -- # read -r var val 00:07:12.055 21:08:06 -- accel/accel.sh@20 -- # val= 00:07:12.055 21:08:06 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.055 21:08:06 -- accel/accel.sh@19 -- # IFS=: 00:07:12.055 21:08:06 -- accel/accel.sh@19 -- # read -r var val 00:07:12.055 21:08:06 -- accel/accel.sh@20 -- # val=dsa 00:07:12.055 21:08:06 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.055 21:08:06 -- accel/accel.sh@22 -- # accel_module=dsa 00:07:12.055 21:08:06 -- accel/accel.sh@19 -- # IFS=: 00:07:12.055 21:08:06 -- accel/accel.sh@19 -- # read -r var val 00:07:12.055 21:08:06 -- accel/accel.sh@20 -- # val=32 00:07:12.055 21:08:06 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.055 21:08:06 -- accel/accel.sh@19 -- # IFS=: 00:07:12.055 21:08:06 -- accel/accel.sh@19 -- # read -r var val 00:07:12.055 21:08:06 -- accel/accel.sh@20 -- # val=32 00:07:12.055 21:08:06 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.055 21:08:06 -- accel/accel.sh@19 -- # IFS=: 00:07:12.055 21:08:06 -- accel/accel.sh@19 -- # read -r var val 00:07:12.055 21:08:06 -- 
accel/accel.sh@20 -- # val=1 00:07:12.055 21:08:06 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.055 21:08:06 -- accel/accel.sh@19 -- # IFS=: 00:07:12.055 21:08:06 -- accel/accel.sh@19 -- # read -r var val 00:07:12.055 21:08:06 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:12.055 21:08:06 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.055 21:08:06 -- accel/accel.sh@19 -- # IFS=: 00:07:12.055 21:08:06 -- accel/accel.sh@19 -- # read -r var val 00:07:12.055 21:08:06 -- accel/accel.sh@20 -- # val=Yes 00:07:12.055 21:08:06 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.055 21:08:06 -- accel/accel.sh@19 -- # IFS=: 00:07:12.055 21:08:06 -- accel/accel.sh@19 -- # read -r var val 00:07:12.055 21:08:06 -- accel/accel.sh@20 -- # val= 00:07:12.055 21:08:06 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.055 21:08:06 -- accel/accel.sh@19 -- # IFS=: 00:07:12.055 21:08:06 -- accel/accel.sh@19 -- # read -r var val 00:07:12.055 21:08:06 -- accel/accel.sh@20 -- # val= 00:07:12.055 21:08:06 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.055 21:08:06 -- accel/accel.sh@19 -- # IFS=: 00:07:12.055 21:08:06 -- accel/accel.sh@19 -- # read -r var val 00:07:15.365 21:08:09 -- accel/accel.sh@20 -- # val= 00:07:15.365 21:08:09 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.365 21:08:09 -- accel/accel.sh@19 -- # IFS=: 00:07:15.365 21:08:09 -- accel/accel.sh@19 -- # read -r var val 00:07:15.365 21:08:09 -- accel/accel.sh@20 -- # val= 00:07:15.365 21:08:09 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.365 21:08:09 -- accel/accel.sh@19 -- # IFS=: 00:07:15.365 21:08:09 -- accel/accel.sh@19 -- # read -r var val 00:07:15.365 21:08:09 -- accel/accel.sh@20 -- # val= 00:07:15.365 21:08:09 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.365 21:08:09 -- accel/accel.sh@19 -- # IFS=: 00:07:15.365 21:08:09 -- accel/accel.sh@19 -- # read -r var val 00:07:15.365 21:08:09 -- accel/accel.sh@20 -- # val= 00:07:15.365 21:08:09 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.365 21:08:09 -- accel/accel.sh@19 -- # IFS=: 00:07:15.365 21:08:09 -- accel/accel.sh@19 -- # read -r var val 00:07:15.365 21:08:09 -- accel/accel.sh@20 -- # val= 00:07:15.365 21:08:09 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.365 21:08:09 -- accel/accel.sh@19 -- # IFS=: 00:07:15.365 21:08:09 -- accel/accel.sh@19 -- # read -r var val 00:07:15.365 21:08:09 -- accel/accel.sh@20 -- # val= 00:07:15.365 21:08:09 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.365 21:08:09 -- accel/accel.sh@19 -- # IFS=: 00:07:15.365 21:08:09 -- accel/accel.sh@19 -- # read -r var val 00:07:15.365 21:08:09 -- accel/accel.sh@27 -- # [[ -n dsa ]] 00:07:15.365 21:08:09 -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:15.365 21:08:09 -- accel/accel.sh@27 -- # [[ dsa == \d\s\a ]] 00:07:15.365 00:07:15.365 real 0m9.642s 00:07:15.365 user 0m3.261s 00:07:15.365 sys 0m0.210s 00:07:15.365 21:08:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:15.365 21:08:09 -- common/autotest_common.sh@10 -- # set +x 00:07:15.365 ************************************ 00:07:15.365 END TEST accel_dualcast 00:07:15.365 ************************************ 00:07:15.365 21:08:09 -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:15.365 21:08:09 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:15.365 21:08:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:15.365 21:08:09 -- common/autotest_common.sh@10 -- # set +x 00:07:15.365 ************************************ 00:07:15.365 START TEST accel_compare 00:07:15.365 
************************************ 00:07:15.365 21:08:09 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compare -y 00:07:15.365 21:08:09 -- accel/accel.sh@16 -- # local accel_opc 00:07:15.365 21:08:09 -- accel/accel.sh@17 -- # local accel_module 00:07:15.365 21:08:09 -- accel/accel.sh@19 -- # IFS=: 00:07:15.365 21:08:09 -- accel/accel.sh@19 -- # read -r var val 00:07:15.365 21:08:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:15.365 21:08:09 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:15.365 21:08:09 -- accel/accel.sh@12 -- # build_accel_config 00:07:15.365 21:08:09 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:15.365 21:08:09 -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:07:15.365 21:08:09 -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:07:15.365 21:08:09 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:07:15.365 21:08:09 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:07:15.365 21:08:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.365 21:08:09 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:15.365 21:08:09 -- accel/accel.sh@40 -- # local IFS=, 00:07:15.365 21:08:09 -- accel/accel.sh@41 -- # jq -r . 00:07:15.365 [2024-04-23 21:08:09.179398] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 00:07:15.365 [2024-04-23 21:08:09.179503] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1254298 ] 00:07:15.365 EAL: No free 2048 kB hugepages reported on node 1 00:07:15.365 [2024-04-23 21:08:09.293767] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.365 [2024-04-23 21:08:09.383356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.365 [2024-04-23 21:08:09.387845] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:07:15.365 [2024-04-23 21:08:09.395812] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:07:21.952 21:08:15 -- accel/accel.sh@20 -- # val= 00:07:21.952 21:08:15 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.952 21:08:15 -- accel/accel.sh@19 -- # IFS=: 00:07:21.952 21:08:15 -- accel/accel.sh@19 -- # read -r var val 00:07:21.952 21:08:15 -- accel/accel.sh@20 -- # val= 00:07:21.952 21:08:15 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.952 21:08:15 -- accel/accel.sh@19 -- # IFS=: 00:07:21.952 21:08:15 -- accel/accel.sh@19 -- # read -r var val 00:07:21.952 21:08:15 -- accel/accel.sh@20 -- # val=0x1 00:07:21.952 21:08:15 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.952 21:08:15 -- accel/accel.sh@19 -- # IFS=: 00:07:21.952 21:08:15 -- accel/accel.sh@19 -- # read -r var val 00:07:21.952 21:08:15 -- accel/accel.sh@20 -- # val= 00:07:21.952 21:08:15 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.952 21:08:15 -- accel/accel.sh@19 -- # IFS=: 00:07:21.952 21:08:15 -- accel/accel.sh@19 -- # read -r var val 00:07:21.952 21:08:15 -- accel/accel.sh@20 -- # val= 00:07:21.952 21:08:15 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.952 21:08:15 -- accel/accel.sh@19 -- # IFS=: 00:07:21.952 21:08:15 -- accel/accel.sh@19 -- # read -r var val 00:07:21.952 21:08:15 -- accel/accel.sh@20 -- # val=compare 00:07:21.952 21:08:15 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.952 21:08:15 -- 
accel/accel.sh@23 -- # accel_opc=compare 00:07:21.952 21:08:15 -- accel/accel.sh@19 -- # IFS=: 00:07:21.952 21:08:15 -- accel/accel.sh@19 -- # read -r var val 00:07:21.952 21:08:15 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:21.952 21:08:15 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.952 21:08:15 -- accel/accel.sh@19 -- # IFS=: 00:07:21.952 21:08:15 -- accel/accel.sh@19 -- # read -r var val 00:07:21.952 21:08:15 -- accel/accel.sh@20 -- # val= 00:07:21.952 21:08:15 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.952 21:08:15 -- accel/accel.sh@19 -- # IFS=: 00:07:21.952 21:08:15 -- accel/accel.sh@19 -- # read -r var val 00:07:21.952 21:08:15 -- accel/accel.sh@20 -- # val=dsa 00:07:21.952 21:08:15 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.952 21:08:15 -- accel/accel.sh@22 -- # accel_module=dsa 00:07:21.952 21:08:15 -- accel/accel.sh@19 -- # IFS=: 00:07:21.952 21:08:15 -- accel/accel.sh@19 -- # read -r var val 00:07:21.952 21:08:15 -- accel/accel.sh@20 -- # val=32 00:07:21.952 21:08:15 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.952 21:08:15 -- accel/accel.sh@19 -- # IFS=: 00:07:21.952 21:08:15 -- accel/accel.sh@19 -- # read -r var val 00:07:21.952 21:08:15 -- accel/accel.sh@20 -- # val=32 00:07:21.952 21:08:15 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.952 21:08:15 -- accel/accel.sh@19 -- # IFS=: 00:07:21.952 21:08:15 -- accel/accel.sh@19 -- # read -r var val 00:07:21.952 21:08:15 -- accel/accel.sh@20 -- # val=1 00:07:21.952 21:08:15 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.952 21:08:15 -- accel/accel.sh@19 -- # IFS=: 00:07:21.952 21:08:15 -- accel/accel.sh@19 -- # read -r var val 00:07:21.952 21:08:15 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:21.952 21:08:15 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.952 21:08:15 -- accel/accel.sh@19 -- # IFS=: 00:07:21.952 21:08:15 -- accel/accel.sh@19 -- # read -r var val 00:07:21.952 21:08:15 -- accel/accel.sh@20 -- # val=Yes 00:07:21.952 21:08:15 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.952 21:08:15 -- accel/accel.sh@19 -- # IFS=: 00:07:21.952 21:08:15 -- accel/accel.sh@19 -- # read -r var val 00:07:21.952 21:08:15 -- accel/accel.sh@20 -- # val= 00:07:21.952 21:08:15 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.952 21:08:15 -- accel/accel.sh@19 -- # IFS=: 00:07:21.953 21:08:15 -- accel/accel.sh@19 -- # read -r var val 00:07:21.953 21:08:15 -- accel/accel.sh@20 -- # val= 00:07:21.953 21:08:15 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.953 21:08:15 -- accel/accel.sh@19 -- # IFS=: 00:07:21.953 21:08:15 -- accel/accel.sh@19 -- # read -r var val 00:07:25.254 21:08:18 -- accel/accel.sh@20 -- # val= 00:07:25.254 21:08:18 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.254 21:08:18 -- accel/accel.sh@19 -- # IFS=: 00:07:25.254 21:08:18 -- accel/accel.sh@19 -- # read -r var val 00:07:25.254 21:08:18 -- accel/accel.sh@20 -- # val= 00:07:25.254 21:08:18 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.254 21:08:18 -- accel/accel.sh@19 -- # IFS=: 00:07:25.254 21:08:18 -- accel/accel.sh@19 -- # read -r var val 00:07:25.254 21:08:18 -- accel/accel.sh@20 -- # val= 00:07:25.254 21:08:18 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.254 21:08:18 -- accel/accel.sh@19 -- # IFS=: 00:07:25.254 21:08:18 -- accel/accel.sh@19 -- # read -r var val 00:07:25.254 21:08:18 -- accel/accel.sh@20 -- # val= 00:07:25.254 21:08:18 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.254 21:08:18 -- accel/accel.sh@19 -- # IFS=: 00:07:25.254 21:08:18 -- accel/accel.sh@19 -- # read -r var val 00:07:25.254 
21:08:18 -- accel/accel.sh@20 -- # val= 00:07:25.254 21:08:18 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.254 21:08:18 -- accel/accel.sh@19 -- # IFS=: 00:07:25.254 21:08:18 -- accel/accel.sh@19 -- # read -r var val 00:07:25.254 21:08:18 -- accel/accel.sh@20 -- # val= 00:07:25.254 21:08:18 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.254 21:08:18 -- accel/accel.sh@19 -- # IFS=: 00:07:25.254 21:08:18 -- accel/accel.sh@19 -- # read -r var val 00:07:25.254 21:08:18 -- accel/accel.sh@27 -- # [[ -n dsa ]] 00:07:25.254 21:08:18 -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:25.254 21:08:18 -- accel/accel.sh@27 -- # [[ dsa == \d\s\a ]] 00:07:25.254 00:07:25.254 real 0m9.684s 00:07:25.254 user 0m3.281s 00:07:25.254 sys 0m0.230s 00:07:25.254 21:08:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:25.254 21:08:18 -- common/autotest_common.sh@10 -- # set +x 00:07:25.254 ************************************ 00:07:25.254 END TEST accel_compare 00:07:25.254 ************************************ 00:07:25.254 21:08:18 -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:25.254 21:08:18 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:25.254 21:08:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:25.254 21:08:18 -- common/autotest_common.sh@10 -- # set +x 00:07:25.254 ************************************ 00:07:25.254 START TEST accel_xor 00:07:25.254 ************************************ 00:07:25.254 21:08:18 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y 00:07:25.254 21:08:18 -- accel/accel.sh@16 -- # local accel_opc 00:07:25.254 21:08:18 -- accel/accel.sh@17 -- # local accel_module 00:07:25.254 21:08:18 -- accel/accel.sh@19 -- # IFS=: 00:07:25.254 21:08:18 -- accel/accel.sh@19 -- # read -r var val 00:07:25.254 21:08:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:25.254 21:08:18 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:25.254 21:08:18 -- accel/accel.sh@12 -- # build_accel_config 00:07:25.254 21:08:18 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:25.254 21:08:18 -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:07:25.254 21:08:18 -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:07:25.254 21:08:18 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:07:25.254 21:08:18 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:07:25.254 21:08:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.255 21:08:18 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:25.255 21:08:18 -- accel/accel.sh@40 -- # local IFS=, 00:07:25.255 21:08:18 -- accel/accel.sh@41 -- # jq -r . 00:07:25.255 [2024-04-23 21:08:18.960011] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 
00:07:25.255 [2024-04-23 21:08:18.960119] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1256259 ] 00:07:25.255 EAL: No free 2048 kB hugepages reported on node 1 00:07:25.255 [2024-04-23 21:08:19.083251] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.255 [2024-04-23 21:08:19.182164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.255 [2024-04-23 21:08:19.186689] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:07:25.255 [2024-04-23 21:08:19.194642] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:07:31.848 21:08:25 -- accel/accel.sh@20 -- # val= 00:07:31.848 21:08:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.848 21:08:25 -- accel/accel.sh@19 -- # IFS=: 00:07:31.848 21:08:25 -- accel/accel.sh@19 -- # read -r var val 00:07:31.848 21:08:25 -- accel/accel.sh@20 -- # val= 00:07:31.848 21:08:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.848 21:08:25 -- accel/accel.sh@19 -- # IFS=: 00:07:31.848 21:08:25 -- accel/accel.sh@19 -- # read -r var val 00:07:31.848 21:08:25 -- accel/accel.sh@20 -- # val=0x1 00:07:31.848 21:08:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.848 21:08:25 -- accel/accel.sh@19 -- # IFS=: 00:07:31.848 21:08:25 -- accel/accel.sh@19 -- # read -r var val 00:07:31.848 21:08:25 -- accel/accel.sh@20 -- # val= 00:07:31.848 21:08:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.848 21:08:25 -- accel/accel.sh@19 -- # IFS=: 00:07:31.848 21:08:25 -- accel/accel.sh@19 -- # read -r var val 00:07:31.848 21:08:25 -- accel/accel.sh@20 -- # val= 00:07:31.848 21:08:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.848 21:08:25 -- accel/accel.sh@19 -- # IFS=: 00:07:31.848 21:08:25 -- accel/accel.sh@19 -- # read -r var val 00:07:31.848 21:08:25 -- accel/accel.sh@20 -- # val=xor 00:07:31.848 21:08:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.848 21:08:25 -- accel/accel.sh@23 -- # accel_opc=xor 00:07:31.848 21:08:25 -- accel/accel.sh@19 -- # IFS=: 00:07:31.848 21:08:25 -- accel/accel.sh@19 -- # read -r var val 00:07:31.848 21:08:25 -- accel/accel.sh@20 -- # val=2 00:07:31.848 21:08:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.848 21:08:25 -- accel/accel.sh@19 -- # IFS=: 00:07:31.848 21:08:25 -- accel/accel.sh@19 -- # read -r var val 00:07:31.848 21:08:25 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:31.848 21:08:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.848 21:08:25 -- accel/accel.sh@19 -- # IFS=: 00:07:31.848 21:08:25 -- accel/accel.sh@19 -- # read -r var val 00:07:31.848 21:08:25 -- accel/accel.sh@20 -- # val= 00:07:31.848 21:08:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.848 21:08:25 -- accel/accel.sh@19 -- # IFS=: 00:07:31.848 21:08:25 -- accel/accel.sh@19 -- # read -r var val 00:07:31.848 21:08:25 -- accel/accel.sh@20 -- # val=software 00:07:31.848 21:08:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.848 21:08:25 -- accel/accel.sh@22 -- # accel_module=software 00:07:31.848 21:08:25 -- accel/accel.sh@19 -- # IFS=: 00:07:31.848 21:08:25 -- accel/accel.sh@19 -- # read -r var val 00:07:31.848 21:08:25 -- accel/accel.sh@20 -- # val=32 00:07:31.848 21:08:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.848 21:08:25 -- accel/accel.sh@19 -- # IFS=: 00:07:31.848 21:08:25 -- accel/accel.sh@19 -- # read -r var val 00:07:31.848 21:08:25 -- 
accel/accel.sh@20 -- # val=32 00:07:31.848 21:08:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.848 21:08:25 -- accel/accel.sh@19 -- # IFS=: 00:07:31.848 21:08:25 -- accel/accel.sh@19 -- # read -r var val 00:07:31.848 21:08:25 -- accel/accel.sh@20 -- # val=1 00:07:31.848 21:08:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.848 21:08:25 -- accel/accel.sh@19 -- # IFS=: 00:07:31.848 21:08:25 -- accel/accel.sh@19 -- # read -r var val 00:07:31.848 21:08:25 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:31.848 21:08:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.848 21:08:25 -- accel/accel.sh@19 -- # IFS=: 00:07:31.848 21:08:25 -- accel/accel.sh@19 -- # read -r var val 00:07:31.848 21:08:25 -- accel/accel.sh@20 -- # val=Yes 00:07:31.848 21:08:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.848 21:08:25 -- accel/accel.sh@19 -- # IFS=: 00:07:31.848 21:08:25 -- accel/accel.sh@19 -- # read -r var val 00:07:31.848 21:08:25 -- accel/accel.sh@20 -- # val= 00:07:31.848 21:08:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.848 21:08:25 -- accel/accel.sh@19 -- # IFS=: 00:07:31.848 21:08:25 -- accel/accel.sh@19 -- # read -r var val 00:07:31.848 21:08:25 -- accel/accel.sh@20 -- # val= 00:07:31.848 21:08:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.849 21:08:25 -- accel/accel.sh@19 -- # IFS=: 00:07:31.849 21:08:25 -- accel/accel.sh@19 -- # read -r var val 00:07:34.496 21:08:28 -- accel/accel.sh@20 -- # val= 00:07:34.496 21:08:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.496 21:08:28 -- accel/accel.sh@19 -- # IFS=: 00:07:34.496 21:08:28 -- accel/accel.sh@19 -- # read -r var val 00:07:34.496 21:08:28 -- accel/accel.sh@20 -- # val= 00:07:34.496 21:08:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.496 21:08:28 -- accel/accel.sh@19 -- # IFS=: 00:07:34.496 21:08:28 -- accel/accel.sh@19 -- # read -r var val 00:07:34.496 21:08:28 -- accel/accel.sh@20 -- # val= 00:07:34.496 21:08:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.496 21:08:28 -- accel/accel.sh@19 -- # IFS=: 00:07:34.496 21:08:28 -- accel/accel.sh@19 -- # read -r var val 00:07:34.496 21:08:28 -- accel/accel.sh@20 -- # val= 00:07:34.496 21:08:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.496 21:08:28 -- accel/accel.sh@19 -- # IFS=: 00:07:34.496 21:08:28 -- accel/accel.sh@19 -- # read -r var val 00:07:34.496 21:08:28 -- accel/accel.sh@20 -- # val= 00:07:34.496 21:08:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.496 21:08:28 -- accel/accel.sh@19 -- # IFS=: 00:07:34.496 21:08:28 -- accel/accel.sh@19 -- # read -r var val 00:07:34.496 21:08:28 -- accel/accel.sh@20 -- # val= 00:07:34.496 21:08:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.496 21:08:28 -- accel/accel.sh@19 -- # IFS=: 00:07:34.496 21:08:28 -- accel/accel.sh@19 -- # read -r var val 00:07:34.496 21:08:28 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:34.496 21:08:28 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:34.496 21:08:28 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:34.496 00:07:34.496 real 0m9.687s 00:07:34.496 user 0m3.293s 00:07:34.496 sys 0m0.219s 00:07:34.496 21:08:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:34.496 21:08:28 -- common/autotest_common.sh@10 -- # set +x 00:07:34.496 ************************************ 00:07:34.496 END TEST accel_xor 00:07:34.496 ************************************ 00:07:34.496 21:08:28 -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:34.496 21:08:28 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 
00:07:34.496 21:08:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:34.496 21:08:28 -- common/autotest_common.sh@10 -- # set +x 00:07:34.496 ************************************ 00:07:34.496 START TEST accel_xor 00:07:34.496 ************************************ 00:07:34.496 21:08:28 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y -x 3 00:07:34.496 21:08:28 -- accel/accel.sh@16 -- # local accel_opc 00:07:34.496 21:08:28 -- accel/accel.sh@17 -- # local accel_module 00:07:34.496 21:08:28 -- accel/accel.sh@19 -- # IFS=: 00:07:34.496 21:08:28 -- accel/accel.sh@19 -- # read -r var val 00:07:34.496 21:08:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:34.496 21:08:28 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:34.496 21:08:28 -- accel/accel.sh@12 -- # build_accel_config 00:07:34.496 21:08:28 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:34.496 21:08:28 -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:07:34.496 21:08:28 -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:07:34.496 21:08:28 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:07:34.496 21:08:28 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:07:34.496 21:08:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:34.496 21:08:28 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:34.496 21:08:28 -- accel/accel.sh@40 -- # local IFS=, 00:07:34.496 21:08:28 -- accel/accel.sh@41 -- # jq -r . 00:07:34.496 [2024-04-23 21:08:28.735609] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 00:07:34.496 [2024-04-23 21:08:28.735679] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1258179 ] 00:07:34.757 EAL: No free 2048 kB hugepages reported on node 1 00:07:34.757 [2024-04-23 21:08:28.824391] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.757 [2024-04-23 21:08:28.914471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.757 [2024-04-23 21:08:28.918957] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:07:34.757 [2024-04-23 21:08:28.926924] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:07:41.350 21:08:35 -- accel/accel.sh@20 -- # val= 00:07:41.350 21:08:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.350 21:08:35 -- accel/accel.sh@19 -- # IFS=: 00:07:41.350 21:08:35 -- accel/accel.sh@19 -- # read -r var val 00:07:41.350 21:08:35 -- accel/accel.sh@20 -- # val= 00:07:41.350 21:08:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.350 21:08:35 -- accel/accel.sh@19 -- # IFS=: 00:07:41.350 21:08:35 -- accel/accel.sh@19 -- # read -r var val 00:07:41.350 21:08:35 -- accel/accel.sh@20 -- # val=0x1 00:07:41.350 21:08:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.350 21:08:35 -- accel/accel.sh@19 -- # IFS=: 00:07:41.350 21:08:35 -- accel/accel.sh@19 -- # read -r var val 00:07:41.350 21:08:35 -- accel/accel.sh@20 -- # val= 00:07:41.350 21:08:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.350 21:08:35 -- accel/accel.sh@19 -- # IFS=: 00:07:41.350 21:08:35 -- accel/accel.sh@19 -- # read -r var val 00:07:41.350 21:08:35 -- accel/accel.sh@20 -- # val= 00:07:41.350 21:08:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.350 21:08:35 -- 
accel/accel.sh@19 -- # IFS=: 00:07:41.350 21:08:35 -- accel/accel.sh@19 -- # read -r var val 00:07:41.350 21:08:35 -- accel/accel.sh@20 -- # val=xor 00:07:41.350 21:08:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.350 21:08:35 -- accel/accel.sh@23 -- # accel_opc=xor 00:07:41.350 21:08:35 -- accel/accel.sh@19 -- # IFS=: 00:07:41.350 21:08:35 -- accel/accel.sh@19 -- # read -r var val 00:07:41.350 21:08:35 -- accel/accel.sh@20 -- # val=3 00:07:41.350 21:08:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.350 21:08:35 -- accel/accel.sh@19 -- # IFS=: 00:07:41.350 21:08:35 -- accel/accel.sh@19 -- # read -r var val 00:07:41.350 21:08:35 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:41.350 21:08:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.350 21:08:35 -- accel/accel.sh@19 -- # IFS=: 00:07:41.350 21:08:35 -- accel/accel.sh@19 -- # read -r var val 00:07:41.350 21:08:35 -- accel/accel.sh@20 -- # val= 00:07:41.350 21:08:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.350 21:08:35 -- accel/accel.sh@19 -- # IFS=: 00:07:41.350 21:08:35 -- accel/accel.sh@19 -- # read -r var val 00:07:41.350 21:08:35 -- accel/accel.sh@20 -- # val=software 00:07:41.350 21:08:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.350 21:08:35 -- accel/accel.sh@22 -- # accel_module=software 00:07:41.350 21:08:35 -- accel/accel.sh@19 -- # IFS=: 00:07:41.350 21:08:35 -- accel/accel.sh@19 -- # read -r var val 00:07:41.350 21:08:35 -- accel/accel.sh@20 -- # val=32 00:07:41.350 21:08:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.350 21:08:35 -- accel/accel.sh@19 -- # IFS=: 00:07:41.350 21:08:35 -- accel/accel.sh@19 -- # read -r var val 00:07:41.350 21:08:35 -- accel/accel.sh@20 -- # val=32 00:07:41.350 21:08:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.350 21:08:35 -- accel/accel.sh@19 -- # IFS=: 00:07:41.350 21:08:35 -- accel/accel.sh@19 -- # read -r var val 00:07:41.350 21:08:35 -- accel/accel.sh@20 -- # val=1 00:07:41.350 21:08:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.350 21:08:35 -- accel/accel.sh@19 -- # IFS=: 00:07:41.350 21:08:35 -- accel/accel.sh@19 -- # read -r var val 00:07:41.350 21:08:35 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:41.350 21:08:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.350 21:08:35 -- accel/accel.sh@19 -- # IFS=: 00:07:41.350 21:08:35 -- accel/accel.sh@19 -- # read -r var val 00:07:41.350 21:08:35 -- accel/accel.sh@20 -- # val=Yes 00:07:41.350 21:08:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.350 21:08:35 -- accel/accel.sh@19 -- # IFS=: 00:07:41.350 21:08:35 -- accel/accel.sh@19 -- # read -r var val 00:07:41.350 21:08:35 -- accel/accel.sh@20 -- # val= 00:07:41.350 21:08:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.350 21:08:35 -- accel/accel.sh@19 -- # IFS=: 00:07:41.350 21:08:35 -- accel/accel.sh@19 -- # read -r var val 00:07:41.350 21:08:35 -- accel/accel.sh@20 -- # val= 00:07:41.350 21:08:35 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.350 21:08:35 -- accel/accel.sh@19 -- # IFS=: 00:07:41.350 21:08:35 -- accel/accel.sh@19 -- # read -r var val 00:07:44.650 21:08:38 -- accel/accel.sh@20 -- # val= 00:07:44.650 21:08:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.650 21:08:38 -- accel/accel.sh@19 -- # IFS=: 00:07:44.650 21:08:38 -- accel/accel.sh@19 -- # read -r var val 00:07:44.650 21:08:38 -- accel/accel.sh@20 -- # val= 00:07:44.650 21:08:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.650 21:08:38 -- accel/accel.sh@19 -- # IFS=: 00:07:44.650 21:08:38 -- accel/accel.sh@19 -- # read -r var val 
00:07:44.650 21:08:38 -- accel/accel.sh@20 -- # val= 00:07:44.650 21:08:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.650 21:08:38 -- accel/accel.sh@19 -- # IFS=: 00:07:44.650 21:08:38 -- accel/accel.sh@19 -- # read -r var val 00:07:44.650 21:08:38 -- accel/accel.sh@20 -- # val= 00:07:44.650 21:08:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.650 21:08:38 -- accel/accel.sh@19 -- # IFS=: 00:07:44.650 21:08:38 -- accel/accel.sh@19 -- # read -r var val 00:07:44.650 21:08:38 -- accel/accel.sh@20 -- # val= 00:07:44.650 21:08:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.650 21:08:38 -- accel/accel.sh@19 -- # IFS=: 00:07:44.650 21:08:38 -- accel/accel.sh@19 -- # read -r var val 00:07:44.650 21:08:38 -- accel/accel.sh@20 -- # val= 00:07:44.650 21:08:38 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.650 21:08:38 -- accel/accel.sh@19 -- # IFS=: 00:07:44.650 21:08:38 -- accel/accel.sh@19 -- # read -r var val 00:07:44.650 21:08:38 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:44.650 21:08:38 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:44.650 21:08:38 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:44.650 00:07:44.650 real 0m9.658s 00:07:44.650 user 0m3.290s 00:07:44.650 sys 0m0.196s 00:07:44.650 21:08:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:44.650 21:08:38 -- common/autotest_common.sh@10 -- # set +x 00:07:44.650 ************************************ 00:07:44.650 END TEST accel_xor 00:07:44.650 ************************************ 00:07:44.650 21:08:38 -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:44.650 21:08:38 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:44.650 21:08:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:44.650 21:08:38 -- common/autotest_common.sh@10 -- # set +x 00:07:44.650 ************************************ 00:07:44.650 START TEST accel_dif_verify 00:07:44.650 ************************************ 00:07:44.650 21:08:38 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_verify 00:07:44.650 21:08:38 -- accel/accel.sh@16 -- # local accel_opc 00:07:44.650 21:08:38 -- accel/accel.sh@17 -- # local accel_module 00:07:44.650 21:08:38 -- accel/accel.sh@19 -- # IFS=: 00:07:44.650 21:08:38 -- accel/accel.sh@19 -- # read -r var val 00:07:44.650 21:08:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:44.650 21:08:38 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:44.650 21:08:38 -- accel/accel.sh@12 -- # build_accel_config 00:07:44.650 21:08:38 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:44.650 21:08:38 -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:07:44.650 21:08:38 -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:07:44.650 21:08:38 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:07:44.650 21:08:38 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:07:44.650 21:08:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:44.650 21:08:38 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:44.650 21:08:38 -- accel/accel.sh@40 -- # local IFS=, 00:07:44.650 21:08:38 -- accel/accel.sh@41 -- # jq -r . 00:07:44.650 [2024-04-23 21:08:38.513119] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 
00:07:44.650 [2024-04-23 21:08:38.513221] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1260231 ] 00:07:44.650 EAL: No free 2048 kB hugepages reported on node 1 00:07:44.650 [2024-04-23 21:08:38.624644] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.650 [2024-04-23 21:08:38.720084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.650 [2024-04-23 21:08:38.724549] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:07:44.650 [2024-04-23 21:08:38.732519] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:07:51.234 21:08:45 -- accel/accel.sh@20 -- # val= 00:07:51.234 21:08:45 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.234 21:08:45 -- accel/accel.sh@19 -- # IFS=: 00:07:51.234 21:08:45 -- accel/accel.sh@19 -- # read -r var val 00:07:51.234 21:08:45 -- accel/accel.sh@20 -- # val= 00:07:51.234 21:08:45 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.234 21:08:45 -- accel/accel.sh@19 -- # IFS=: 00:07:51.234 21:08:45 -- accel/accel.sh@19 -- # read -r var val 00:07:51.234 21:08:45 -- accel/accel.sh@20 -- # val=0x1 00:07:51.234 21:08:45 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.234 21:08:45 -- accel/accel.sh@19 -- # IFS=: 00:07:51.234 21:08:45 -- accel/accel.sh@19 -- # read -r var val 00:07:51.234 21:08:45 -- accel/accel.sh@20 -- # val= 00:07:51.234 21:08:45 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.234 21:08:45 -- accel/accel.sh@19 -- # IFS=: 00:07:51.234 21:08:45 -- accel/accel.sh@19 -- # read -r var val 00:07:51.234 21:08:45 -- accel/accel.sh@20 -- # val= 00:07:51.234 21:08:45 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.234 21:08:45 -- accel/accel.sh@19 -- # IFS=: 00:07:51.234 21:08:45 -- accel/accel.sh@19 -- # read -r var val 00:07:51.234 21:08:45 -- accel/accel.sh@20 -- # val=dif_verify 00:07:51.234 21:08:45 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.234 21:08:45 -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:07:51.234 21:08:45 -- accel/accel.sh@19 -- # IFS=: 00:07:51.234 21:08:45 -- accel/accel.sh@19 -- # read -r var val 00:07:51.234 21:08:45 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:51.234 21:08:45 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.234 21:08:45 -- accel/accel.sh@19 -- # IFS=: 00:07:51.234 21:08:45 -- accel/accel.sh@19 -- # read -r var val 00:07:51.234 21:08:45 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:51.234 21:08:45 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.234 21:08:45 -- accel/accel.sh@19 -- # IFS=: 00:07:51.234 21:08:45 -- accel/accel.sh@19 -- # read -r var val 00:07:51.234 21:08:45 -- accel/accel.sh@20 -- # val='512 bytes' 00:07:51.234 21:08:45 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.234 21:08:45 -- accel/accel.sh@19 -- # IFS=: 00:07:51.234 21:08:45 -- accel/accel.sh@19 -- # read -r var val 00:07:51.234 21:08:45 -- accel/accel.sh@20 -- # val='8 bytes' 00:07:51.234 21:08:45 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.234 21:08:45 -- accel/accel.sh@19 -- # IFS=: 00:07:51.234 21:08:45 -- accel/accel.sh@19 -- # read -r var val 00:07:51.234 21:08:45 -- accel/accel.sh@20 -- # val= 00:07:51.234 21:08:45 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.234 21:08:45 -- accel/accel.sh@19 -- # IFS=: 00:07:51.234 21:08:45 -- accel/accel.sh@19 -- # read -r var val 00:07:51.234 21:08:45 -- accel/accel.sh@20 -- # val=dsa 
00:07:51.234 21:08:45 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.234 21:08:45 -- accel/accel.sh@22 -- # accel_module=dsa 00:07:51.234 21:08:45 -- accel/accel.sh@19 -- # IFS=: 00:07:51.234 21:08:45 -- accel/accel.sh@19 -- # read -r var val 00:07:51.234 21:08:45 -- accel/accel.sh@20 -- # val=32 00:07:51.234 21:08:45 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.234 21:08:45 -- accel/accel.sh@19 -- # IFS=: 00:07:51.234 21:08:45 -- accel/accel.sh@19 -- # read -r var val 00:07:51.234 21:08:45 -- accel/accel.sh@20 -- # val=32 00:07:51.234 21:08:45 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.234 21:08:45 -- accel/accel.sh@19 -- # IFS=: 00:07:51.234 21:08:45 -- accel/accel.sh@19 -- # read -r var val 00:07:51.234 21:08:45 -- accel/accel.sh@20 -- # val=1 00:07:51.234 21:08:45 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.234 21:08:45 -- accel/accel.sh@19 -- # IFS=: 00:07:51.234 21:08:45 -- accel/accel.sh@19 -- # read -r var val 00:07:51.234 21:08:45 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:51.234 21:08:45 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.234 21:08:45 -- accel/accel.sh@19 -- # IFS=: 00:07:51.234 21:08:45 -- accel/accel.sh@19 -- # read -r var val 00:07:51.234 21:08:45 -- accel/accel.sh@20 -- # val=No 00:07:51.234 21:08:45 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.234 21:08:45 -- accel/accel.sh@19 -- # IFS=: 00:07:51.234 21:08:45 -- accel/accel.sh@19 -- # read -r var val 00:07:51.234 21:08:45 -- accel/accel.sh@20 -- # val= 00:07:51.234 21:08:45 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.234 21:08:45 -- accel/accel.sh@19 -- # IFS=: 00:07:51.234 21:08:45 -- accel/accel.sh@19 -- # read -r var val 00:07:51.234 21:08:45 -- accel/accel.sh@20 -- # val= 00:07:51.234 21:08:45 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.234 21:08:45 -- accel/accel.sh@19 -- # IFS=: 00:07:51.234 21:08:45 -- accel/accel.sh@19 -- # read -r var val 00:07:54.531 21:08:48 -- accel/accel.sh@20 -- # val= 00:07:54.531 21:08:48 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.531 21:08:48 -- accel/accel.sh@19 -- # IFS=: 00:07:54.531 21:08:48 -- accel/accel.sh@19 -- # read -r var val 00:07:54.531 21:08:48 -- accel/accel.sh@20 -- # val= 00:07:54.531 21:08:48 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.531 21:08:48 -- accel/accel.sh@19 -- # IFS=: 00:07:54.531 21:08:48 -- accel/accel.sh@19 -- # read -r var val 00:07:54.531 21:08:48 -- accel/accel.sh@20 -- # val= 00:07:54.531 21:08:48 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.531 21:08:48 -- accel/accel.sh@19 -- # IFS=: 00:07:54.531 21:08:48 -- accel/accel.sh@19 -- # read -r var val 00:07:54.531 21:08:48 -- accel/accel.sh@20 -- # val= 00:07:54.531 21:08:48 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.531 21:08:48 -- accel/accel.sh@19 -- # IFS=: 00:07:54.531 21:08:48 -- accel/accel.sh@19 -- # read -r var val 00:07:54.531 21:08:48 -- accel/accel.sh@20 -- # val= 00:07:54.531 21:08:48 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.531 21:08:48 -- accel/accel.sh@19 -- # IFS=: 00:07:54.531 21:08:48 -- accel/accel.sh@19 -- # read -r var val 00:07:54.531 21:08:48 -- accel/accel.sh@20 -- # val= 00:07:54.531 21:08:48 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.531 21:08:48 -- accel/accel.sh@19 -- # IFS=: 00:07:54.531 21:08:48 -- accel/accel.sh@19 -- # read -r var val 00:07:54.531 21:08:48 -- accel/accel.sh@27 -- # [[ -n dsa ]] 00:07:54.531 21:08:48 -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:54.531 21:08:48 -- accel/accel.sh@27 -- # [[ dsa == \d\s\a ]] 00:07:54.531 00:07:54.531 real 0m9.668s 
00:07:54.531 user 0m3.277s 00:07:54.531 sys 0m0.224s 00:07:54.531 21:08:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:54.531 21:08:48 -- common/autotest_common.sh@10 -- # set +x 00:07:54.531 ************************************ 00:07:54.531 END TEST accel_dif_verify 00:07:54.531 ************************************ 00:07:54.531 21:08:48 -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:54.531 21:08:48 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:54.531 21:08:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:54.531 21:08:48 -- common/autotest_common.sh@10 -- # set +x 00:07:54.531 ************************************ 00:07:54.531 START TEST accel_dif_generate 00:07:54.531 ************************************ 00:07:54.531 21:08:48 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate 00:07:54.531 21:08:48 -- accel/accel.sh@16 -- # local accel_opc 00:07:54.531 21:08:48 -- accel/accel.sh@17 -- # local accel_module 00:07:54.531 21:08:48 -- accel/accel.sh@19 -- # IFS=: 00:07:54.531 21:08:48 -- accel/accel.sh@19 -- # read -r var val 00:07:54.531 21:08:48 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:54.531 21:08:48 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:54.531 21:08:48 -- accel/accel.sh@12 -- # build_accel_config 00:07:54.531 21:08:48 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:54.531 21:08:48 -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:07:54.531 21:08:48 -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:07:54.531 21:08:48 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:07:54.531 21:08:48 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:07:54.531 21:08:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:54.532 21:08:48 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:54.532 21:08:48 -- accel/accel.sh@40 -- # local IFS=, 00:07:54.532 21:08:48 -- accel/accel.sh@41 -- # jq -r . 00:07:54.532 [2024-04-23 21:08:48.321102] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 
00:07:54.532 [2024-04-23 21:08:48.321236] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1262070 ] 00:07:54.532 EAL: No free 2048 kB hugepages reported on node 1 00:07:54.532 [2024-04-23 21:08:48.453502] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.532 [2024-04-23 21:08:48.552466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.532 [2024-04-23 21:08:48.557004] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:07:54.532 [2024-04-23 21:08:48.564959] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:08:01.107 21:08:54 -- accel/accel.sh@20 -- # val= 00:08:01.107 21:08:54 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.107 21:08:54 -- accel/accel.sh@19 -- # IFS=: 00:08:01.107 21:08:54 -- accel/accel.sh@19 -- # read -r var val 00:08:01.107 21:08:54 -- accel/accel.sh@20 -- # val= 00:08:01.107 21:08:54 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.107 21:08:54 -- accel/accel.sh@19 -- # IFS=: 00:08:01.107 21:08:54 -- accel/accel.sh@19 -- # read -r var val 00:08:01.107 21:08:54 -- accel/accel.sh@20 -- # val=0x1 00:08:01.107 21:08:54 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.107 21:08:54 -- accel/accel.sh@19 -- # IFS=: 00:08:01.107 21:08:54 -- accel/accel.sh@19 -- # read -r var val 00:08:01.108 21:08:54 -- accel/accel.sh@20 -- # val= 00:08:01.108 21:08:54 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.108 21:08:54 -- accel/accel.sh@19 -- # IFS=: 00:08:01.108 21:08:54 -- accel/accel.sh@19 -- # read -r var val 00:08:01.108 21:08:54 -- accel/accel.sh@20 -- # val= 00:08:01.108 21:08:54 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.108 21:08:54 -- accel/accel.sh@19 -- # IFS=: 00:08:01.108 21:08:54 -- accel/accel.sh@19 -- # read -r var val 00:08:01.108 21:08:54 -- accel/accel.sh@20 -- # val=dif_generate 00:08:01.108 21:08:54 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.108 21:08:54 -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:08:01.108 21:08:54 -- accel/accel.sh@19 -- # IFS=: 00:08:01.108 21:08:54 -- accel/accel.sh@19 -- # read -r var val 00:08:01.108 21:08:54 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:01.108 21:08:54 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.108 21:08:54 -- accel/accel.sh@19 -- # IFS=: 00:08:01.108 21:08:54 -- accel/accel.sh@19 -- # read -r var val 00:08:01.108 21:08:54 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:01.108 21:08:54 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.108 21:08:54 -- accel/accel.sh@19 -- # IFS=: 00:08:01.108 21:08:54 -- accel/accel.sh@19 -- # read -r var val 00:08:01.108 21:08:54 -- accel/accel.sh@20 -- # val='512 bytes' 00:08:01.108 21:08:54 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.108 21:08:54 -- accel/accel.sh@19 -- # IFS=: 00:08:01.108 21:08:54 -- accel/accel.sh@19 -- # read -r var val 00:08:01.108 21:08:54 -- accel/accel.sh@20 -- # val='8 bytes' 00:08:01.108 21:08:54 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.108 21:08:54 -- accel/accel.sh@19 -- # IFS=: 00:08:01.108 21:08:54 -- accel/accel.sh@19 -- # read -r var val 00:08:01.108 21:08:54 -- accel/accel.sh@20 -- # val= 00:08:01.108 21:08:54 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.108 21:08:54 -- accel/accel.sh@19 -- # IFS=: 00:08:01.108 21:08:54 -- accel/accel.sh@19 -- # read -r var val 00:08:01.108 21:08:54 -- accel/accel.sh@20 -- # 
val=software 00:08:01.108 21:08:54 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.108 21:08:54 -- accel/accel.sh@22 -- # accel_module=software 00:08:01.108 21:08:54 -- accel/accel.sh@19 -- # IFS=: 00:08:01.108 21:08:54 -- accel/accel.sh@19 -- # read -r var val 00:08:01.108 21:08:54 -- accel/accel.sh@20 -- # val=32 00:08:01.108 21:08:54 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.108 21:08:54 -- accel/accel.sh@19 -- # IFS=: 00:08:01.108 21:08:54 -- accel/accel.sh@19 -- # read -r var val 00:08:01.108 21:08:54 -- accel/accel.sh@20 -- # val=32 00:08:01.108 21:08:54 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.108 21:08:54 -- accel/accel.sh@19 -- # IFS=: 00:08:01.108 21:08:54 -- accel/accel.sh@19 -- # read -r var val 00:08:01.108 21:08:54 -- accel/accel.sh@20 -- # val=1 00:08:01.108 21:08:54 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.108 21:08:54 -- accel/accel.sh@19 -- # IFS=: 00:08:01.108 21:08:54 -- accel/accel.sh@19 -- # read -r var val 00:08:01.108 21:08:54 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:01.108 21:08:54 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.108 21:08:54 -- accel/accel.sh@19 -- # IFS=: 00:08:01.108 21:08:54 -- accel/accel.sh@19 -- # read -r var val 00:08:01.108 21:08:54 -- accel/accel.sh@20 -- # val=No 00:08:01.108 21:08:54 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.108 21:08:54 -- accel/accel.sh@19 -- # IFS=: 00:08:01.108 21:08:54 -- accel/accel.sh@19 -- # read -r var val 00:08:01.108 21:08:54 -- accel/accel.sh@20 -- # val= 00:08:01.108 21:08:54 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.108 21:08:54 -- accel/accel.sh@19 -- # IFS=: 00:08:01.108 21:08:54 -- accel/accel.sh@19 -- # read -r var val 00:08:01.108 21:08:54 -- accel/accel.sh@20 -- # val= 00:08:01.108 21:08:54 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.108 21:08:54 -- accel/accel.sh@19 -- # IFS=: 00:08:01.108 21:08:54 -- accel/accel.sh@19 -- # read -r var val 00:08:04.408 21:08:57 -- accel/accel.sh@20 -- # val= 00:08:04.408 21:08:57 -- accel/accel.sh@21 -- # case "$var" in 00:08:04.408 21:08:57 -- accel/accel.sh@19 -- # IFS=: 00:08:04.408 21:08:57 -- accel/accel.sh@19 -- # read -r var val 00:08:04.408 21:08:57 -- accel/accel.sh@20 -- # val= 00:08:04.408 21:08:57 -- accel/accel.sh@21 -- # case "$var" in 00:08:04.408 21:08:57 -- accel/accel.sh@19 -- # IFS=: 00:08:04.408 21:08:57 -- accel/accel.sh@19 -- # read -r var val 00:08:04.408 21:08:57 -- accel/accel.sh@20 -- # val= 00:08:04.408 21:08:57 -- accel/accel.sh@21 -- # case "$var" in 00:08:04.408 21:08:57 -- accel/accel.sh@19 -- # IFS=: 00:08:04.408 21:08:57 -- accel/accel.sh@19 -- # read -r var val 00:08:04.408 21:08:57 -- accel/accel.sh@20 -- # val= 00:08:04.408 21:08:57 -- accel/accel.sh@21 -- # case "$var" in 00:08:04.408 21:08:57 -- accel/accel.sh@19 -- # IFS=: 00:08:04.408 21:08:57 -- accel/accel.sh@19 -- # read -r var val 00:08:04.408 21:08:57 -- accel/accel.sh@20 -- # val= 00:08:04.408 21:08:57 -- accel/accel.sh@21 -- # case "$var" in 00:08:04.408 21:08:57 -- accel/accel.sh@19 -- # IFS=: 00:08:04.408 21:08:57 -- accel/accel.sh@19 -- # read -r var val 00:08:04.408 21:08:57 -- accel/accel.sh@20 -- # val= 00:08:04.408 21:08:57 -- accel/accel.sh@21 -- # case "$var" in 00:08:04.408 21:08:57 -- accel/accel.sh@19 -- # IFS=: 00:08:04.408 21:08:57 -- accel/accel.sh@19 -- # read -r var val 00:08:04.408 21:08:57 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:04.408 21:08:57 -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:08:04.408 21:08:57 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 
00:08:04.408 00:08:04.408 real 0m9.716s 00:08:04.408 user 0m3.280s 00:08:04.408 sys 0m0.274s 00:08:04.408 21:08:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:04.408 21:08:57 -- common/autotest_common.sh@10 -- # set +x 00:08:04.408 ************************************ 00:08:04.408 END TEST accel_dif_generate 00:08:04.408 ************************************ 00:08:04.408 21:08:58 -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:08:04.408 21:08:58 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:08:04.408 21:08:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:04.408 21:08:58 -- common/autotest_common.sh@10 -- # set +x 00:08:04.408 ************************************ 00:08:04.408 START TEST accel_dif_generate_copy 00:08:04.408 ************************************ 00:08:04.408 21:08:58 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate_copy 00:08:04.408 21:08:58 -- accel/accel.sh@16 -- # local accel_opc 00:08:04.408 21:08:58 -- accel/accel.sh@17 -- # local accel_module 00:08:04.408 21:08:58 -- accel/accel.sh@19 -- # IFS=: 00:08:04.408 21:08:58 -- accel/accel.sh@19 -- # read -r var val 00:08:04.408 21:08:58 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:08:04.408 21:08:58 -- accel/accel.sh@12 -- # build_accel_config 00:08:04.408 21:08:58 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:08:04.408 21:08:58 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:04.408 21:08:58 -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:08:04.408 21:08:58 -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:08:04.408 21:08:58 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:08:04.408 21:08:58 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:08:04.408 21:08:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:04.408 21:08:58 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:04.408 21:08:58 -- accel/accel.sh@40 -- # local IFS=, 00:08:04.408 21:08:58 -- accel/accel.sh@41 -- # jq -r . 00:08:04.408 [2024-04-23 21:08:58.124919] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 
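Each START TEST block above opens with build_accel_config: the [[ 1 -gt 0 ]] guards append '{"method": "dsa_scan_accel_module"}' and '{"method": "iaa_scan_accel_module"}' to the accel_json_cfg array, and local IFS=, joins the entries before the result reaches accel_perf on -c /dev/fd/62 (the jq -r . record is the pretty-print step). A minimal sketch of that assembly; HAVE_DSA/HAVE_IAA and the bare-array JSON shape are assumptions, not the script's exact variables or output:

```bash
#!/usr/bin/env bash
# Sketch of the config assembly traced above; flag names and the exact
# JSON wrapper are assumptions.
HAVE_DSA=1 HAVE_IAA=1

accel_json_cfg=()
[[ $HAVE_DSA -gt 0 ]] && accel_json_cfg+=('{"method": "dsa_scan_accel_module"}')
[[ $HAVE_IAA -gt 0 ]] && accel_json_cfg+=('{"method": "iaa_scan_accel_module"}')

join_cfg() {
    local IFS=,                     # comma-join, as in "local IFS=," above
    echo "[${accel_json_cfg[*]}]"
}

join_cfg | jq -r .                  # stands in for accel_perf -c /dev/fd/62
```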
00:08:04.408 [2024-04-23 21:08:58.125019] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1264156 ] 00:08:04.408 EAL: No free 2048 kB hugepages reported on node 1 00:08:04.408 [2024-04-23 21:08:58.236790] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.408 [2024-04-23 21:08:58.331507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.408 [2024-04-23 21:08:58.336000] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:08:04.408 [2024-04-23 21:08:58.343958] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:08:10.982 21:09:04 -- accel/accel.sh@20 -- # val= 00:08:10.982 21:09:04 -- accel/accel.sh@21 -- # case "$var" in 00:08:10.982 21:09:04 -- accel/accel.sh@19 -- # IFS=: 00:08:10.982 21:09:04 -- accel/accel.sh@19 -- # read -r var val 00:08:10.982 21:09:04 -- accel/accel.sh@20 -- # val= 00:08:10.982 21:09:04 -- accel/accel.sh@21 -- # case "$var" in 00:08:10.982 21:09:04 -- accel/accel.sh@19 -- # IFS=: 00:08:10.982 21:09:04 -- accel/accel.sh@19 -- # read -r var val 00:08:10.982 21:09:04 -- accel/accel.sh@20 -- # val=0x1 00:08:10.982 21:09:04 -- accel/accel.sh@21 -- # case "$var" in 00:08:10.982 21:09:04 -- accel/accel.sh@19 -- # IFS=: 00:08:10.982 21:09:04 -- accel/accel.sh@19 -- # read -r var val 00:08:10.982 21:09:04 -- accel/accel.sh@20 -- # val= 00:08:10.982 21:09:04 -- accel/accel.sh@21 -- # case "$var" in 00:08:10.982 21:09:04 -- accel/accel.sh@19 -- # IFS=: 00:08:10.982 21:09:04 -- accel/accel.sh@19 -- # read -r var val 00:08:10.982 21:09:04 -- accel/accel.sh@20 -- # val= 00:08:10.982 21:09:04 -- accel/accel.sh@21 -- # case "$var" in 00:08:10.982 21:09:04 -- accel/accel.sh@19 -- # IFS=: 00:08:10.982 21:09:04 -- accel/accel.sh@19 -- # read -r var val 00:08:10.982 21:09:04 -- accel/accel.sh@20 -- # val=dif_generate_copy 00:08:10.983 21:09:04 -- accel/accel.sh@21 -- # case "$var" in 00:08:10.983 21:09:04 -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:08:10.983 21:09:04 -- accel/accel.sh@19 -- # IFS=: 00:08:10.983 21:09:04 -- accel/accel.sh@19 -- # read -r var val 00:08:10.983 21:09:04 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:10.983 21:09:04 -- accel/accel.sh@21 -- # case "$var" in 00:08:10.983 21:09:04 -- accel/accel.sh@19 -- # IFS=: 00:08:10.983 21:09:04 -- accel/accel.sh@19 -- # read -r var val 00:08:10.983 21:09:04 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:10.983 21:09:04 -- accel/accel.sh@21 -- # case "$var" in 00:08:10.983 21:09:04 -- accel/accel.sh@19 -- # IFS=: 00:08:10.983 21:09:04 -- accel/accel.sh@19 -- # read -r var val 00:08:10.983 21:09:04 -- accel/accel.sh@20 -- # val= 00:08:10.983 21:09:04 -- accel/accel.sh@21 -- # case "$var" in 00:08:10.983 21:09:04 -- accel/accel.sh@19 -- # IFS=: 00:08:10.983 21:09:04 -- accel/accel.sh@19 -- # read -r var val 00:08:10.983 21:09:04 -- accel/accel.sh@20 -- # val=dsa 00:08:10.983 21:09:04 -- accel/accel.sh@21 -- # case "$var" in 00:08:10.983 21:09:04 -- accel/accel.sh@22 -- # accel_module=dsa 00:08:10.983 21:09:04 -- accel/accel.sh@19 -- # IFS=: 00:08:10.983 21:09:04 -- accel/accel.sh@19 -- # read -r var val 00:08:10.983 21:09:04 -- accel/accel.sh@20 -- # val=32 00:08:10.983 21:09:04 -- accel/accel.sh@21 -- # case "$var" in 00:08:10.983 21:09:04 -- accel/accel.sh@19 -- # IFS=: 00:08:10.983 21:09:04 -- accel/accel.sh@19 -- # read -r var 
val 00:08:10.983 21:09:04 -- accel/accel.sh@20 -- # val=32 00:08:10.983 21:09:04 -- accel/accel.sh@21 -- # case "$var" in 00:08:10.983 21:09:04 -- accel/accel.sh@19 -- # IFS=: 00:08:10.983 21:09:04 -- accel/accel.sh@19 -- # read -r var val 00:08:10.983 21:09:04 -- accel/accel.sh@20 -- # val=1 00:08:10.983 21:09:04 -- accel/accel.sh@21 -- # case "$var" in 00:08:10.983 21:09:04 -- accel/accel.sh@19 -- # IFS=: 00:08:10.983 21:09:04 -- accel/accel.sh@19 -- # read -r var val 00:08:10.983 21:09:04 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:10.983 21:09:04 -- accel/accel.sh@21 -- # case "$var" in 00:08:10.983 21:09:04 -- accel/accel.sh@19 -- # IFS=: 00:08:10.983 21:09:04 -- accel/accel.sh@19 -- # read -r var val 00:08:10.983 21:09:04 -- accel/accel.sh@20 -- # val=No 00:08:10.983 21:09:04 -- accel/accel.sh@21 -- # case "$var" in 00:08:10.983 21:09:04 -- accel/accel.sh@19 -- # IFS=: 00:08:10.983 21:09:04 -- accel/accel.sh@19 -- # read -r var val 00:08:10.983 21:09:04 -- accel/accel.sh@20 -- # val= 00:08:10.983 21:09:04 -- accel/accel.sh@21 -- # case "$var" in 00:08:10.983 21:09:04 -- accel/accel.sh@19 -- # IFS=: 00:08:10.983 21:09:04 -- accel/accel.sh@19 -- # read -r var val 00:08:10.983 21:09:04 -- accel/accel.sh@20 -- # val= 00:08:10.983 21:09:04 -- accel/accel.sh@21 -- # case "$var" in 00:08:10.983 21:09:04 -- accel/accel.sh@19 -- # IFS=: 00:08:10.983 21:09:04 -- accel/accel.sh@19 -- # read -r var val 00:08:13.529 21:09:07 -- accel/accel.sh@20 -- # val= 00:08:13.529 21:09:07 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.529 21:09:07 -- accel/accel.sh@19 -- # IFS=: 00:08:13.529 21:09:07 -- accel/accel.sh@19 -- # read -r var val 00:08:13.529 21:09:07 -- accel/accel.sh@20 -- # val= 00:08:13.529 21:09:07 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.529 21:09:07 -- accel/accel.sh@19 -- # IFS=: 00:08:13.529 21:09:07 -- accel/accel.sh@19 -- # read -r var val 00:08:13.529 21:09:07 -- accel/accel.sh@20 -- # val= 00:08:13.529 21:09:07 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.529 21:09:07 -- accel/accel.sh@19 -- # IFS=: 00:08:13.529 21:09:07 -- accel/accel.sh@19 -- # read -r var val 00:08:13.529 21:09:07 -- accel/accel.sh@20 -- # val= 00:08:13.529 21:09:07 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.529 21:09:07 -- accel/accel.sh@19 -- # IFS=: 00:08:13.529 21:09:07 -- accel/accel.sh@19 -- # read -r var val 00:08:13.529 21:09:07 -- accel/accel.sh@20 -- # val= 00:08:13.529 21:09:07 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.529 21:09:07 -- accel/accel.sh@19 -- # IFS=: 00:08:13.529 21:09:07 -- accel/accel.sh@19 -- # read -r var val 00:08:13.529 21:09:07 -- accel/accel.sh@20 -- # val= 00:08:13.529 21:09:07 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.529 21:09:07 -- accel/accel.sh@19 -- # IFS=: 00:08:13.529 21:09:07 -- accel/accel.sh@19 -- # read -r var val 00:08:13.529 21:09:07 -- accel/accel.sh@27 -- # [[ -n dsa ]] 00:08:13.529 21:09:07 -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:08:13.529 21:09:07 -- accel/accel.sh@27 -- # [[ dsa == \d\s\a ]] 00:08:13.529 00:08:13.529 real 0m9.663s 00:08:13.529 user 0m3.266s 00:08:13.529 sys 0m0.225s 00:08:13.529 21:09:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:13.529 21:09:07 -- common/autotest_common.sh@10 -- # set +x 00:08:13.529 ************************************ 00:08:13.529 END TEST accel_dif_generate_copy 00:08:13.529 ************************************ 00:08:13.529 21:09:07 -- accel/accel.sh@115 -- # [[ y == y ]] 00:08:13.529 21:09:07 -- accel/accel.sh@116 -- # run_test accel_comp accel_test 
-t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:08:13.529 21:09:07 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:08:13.529 21:09:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:13.529 21:09:07 -- common/autotest_common.sh@10 -- # set +x 00:08:13.791 ************************************ 00:08:13.791 START TEST accel_comp 00:08:13.791 ************************************ 00:08:13.791 21:09:07 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:08:13.791 21:09:07 -- accel/accel.sh@16 -- # local accel_opc 00:08:13.791 21:09:07 -- accel/accel.sh@17 -- # local accel_module 00:08:13.791 21:09:07 -- accel/accel.sh@19 -- # IFS=: 00:08:13.791 21:09:07 -- accel/accel.sh@19 -- # read -r var val 00:08:13.791 21:09:07 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:08:13.791 21:09:07 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:08:13.791 21:09:07 -- accel/accel.sh@12 -- # build_accel_config 00:08:13.791 21:09:07 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:13.791 21:09:07 -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:08:13.791 21:09:07 -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:08:13.791 21:09:07 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:08:13.791 21:09:07 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:08:13.791 21:09:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:13.791 21:09:07 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:13.791 21:09:07 -- accel/accel.sh@40 -- # local IFS=, 00:08:13.791 21:09:07 -- accel/accel.sh@41 -- # jq -r . 00:08:13.791 [2024-04-23 21:09:07.892272] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 
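The banner-and-timing frame around every test here (the rows of asterisks, the START TEST / END TEST lines, and the real/user/sys triple) comes from the run_test wrapper in autotest_common.sh. The following is an assumed-shape sketch of such a wrapper, not the verbatim helper:

```bash
#!/usr/bin/env bash
# Assumed reconstruction of the run_test framing seen throughout this log.
run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                        # emits the real/user/sys triple
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}

run_test demo_sleep sleep 1
```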
00:08:13.791 [2024-04-23 21:09:07.892376] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1266523 ] 00:08:13.791 EAL: No free 2048 kB hugepages reported on node 1 00:08:13.791 [2024-04-23 21:09:08.006469] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.052 [2024-04-23 21:09:08.104043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.052 [2024-04-23 21:09:08.108520] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:08:14.052 [2024-04-23 21:09:08.116484] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:08:20.646 21:09:14 -- accel/accel.sh@20 -- # val= 00:08:20.646 21:09:14 -- accel/accel.sh@21 -- # case "$var" in 00:08:20.646 21:09:14 -- accel/accel.sh@19 -- # IFS=: 00:08:20.646 21:09:14 -- accel/accel.sh@19 -- # read -r var val 00:08:20.646 21:09:14 -- accel/accel.sh@20 -- # val= 00:08:20.646 21:09:14 -- accel/accel.sh@21 -- # case "$var" in 00:08:20.646 21:09:14 -- accel/accel.sh@19 -- # IFS=: 00:08:20.646 21:09:14 -- accel/accel.sh@19 -- # read -r var val 00:08:20.646 21:09:14 -- accel/accel.sh@20 -- # val= 00:08:20.646 21:09:14 -- accel/accel.sh@21 -- # case "$var" in 00:08:20.646 21:09:14 -- accel/accel.sh@19 -- # IFS=: 00:08:20.646 21:09:14 -- accel/accel.sh@19 -- # read -r var val 00:08:20.646 21:09:14 -- accel/accel.sh@20 -- # val=0x1 00:08:20.646 21:09:14 -- accel/accel.sh@21 -- # case "$var" in 00:08:20.646 21:09:14 -- accel/accel.sh@19 -- # IFS=: 00:08:20.646 21:09:14 -- accel/accel.sh@19 -- # read -r var val 00:08:20.646 21:09:14 -- accel/accel.sh@20 -- # val= 00:08:20.646 21:09:14 -- accel/accel.sh@21 -- # case "$var" in 00:08:20.646 21:09:14 -- accel/accel.sh@19 -- # IFS=: 00:08:20.646 21:09:14 -- accel/accel.sh@19 -- # read -r var val 00:08:20.646 21:09:14 -- accel/accel.sh@20 -- # val= 00:08:20.646 21:09:14 -- accel/accel.sh@21 -- # case "$var" in 00:08:20.646 21:09:14 -- accel/accel.sh@19 -- # IFS=: 00:08:20.646 21:09:14 -- accel/accel.sh@19 -- # read -r var val 00:08:20.646 21:09:14 -- accel/accel.sh@20 -- # val=compress 00:08:20.646 21:09:14 -- accel/accel.sh@21 -- # case "$var" in 00:08:20.646 21:09:14 -- accel/accel.sh@23 -- # accel_opc=compress 00:08:20.646 21:09:14 -- accel/accel.sh@19 -- # IFS=: 00:08:20.646 21:09:14 -- accel/accel.sh@19 -- # read -r var val 00:08:20.646 21:09:14 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:20.646 21:09:14 -- accel/accel.sh@21 -- # case "$var" in 00:08:20.646 21:09:14 -- accel/accel.sh@19 -- # IFS=: 00:08:20.646 21:09:14 -- accel/accel.sh@19 -- # read -r var val 00:08:20.646 21:09:14 -- accel/accel.sh@20 -- # val= 00:08:20.646 21:09:14 -- accel/accel.sh@21 -- # case "$var" in 00:08:20.646 21:09:14 -- accel/accel.sh@19 -- # IFS=: 00:08:20.646 21:09:14 -- accel/accel.sh@19 -- # read -r var val 00:08:20.646 21:09:14 -- accel/accel.sh@20 -- # val=iaa 00:08:20.646 21:09:14 -- accel/accel.sh@21 -- # case "$var" in 00:08:20.646 21:09:14 -- accel/accel.sh@22 -- # accel_module=iaa 00:08:20.646 21:09:14 -- accel/accel.sh@19 -- # IFS=: 00:08:20.646 21:09:14 -- accel/accel.sh@19 -- # read -r var val 00:08:20.646 21:09:14 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:08:20.646 21:09:14 -- accel/accel.sh@21 -- # case "$var" in 00:08:20.646 21:09:14 -- accel/accel.sh@19 -- # IFS=: 00:08:20.646 21:09:14 -- 
accel/accel.sh@19 -- # read -r var val 00:08:20.646 21:09:14 -- accel/accel.sh@20 -- # val=32 00:08:20.646 21:09:14 -- accel/accel.sh@21 -- # case "$var" in 00:08:20.646 21:09:14 -- accel/accel.sh@19 -- # IFS=: 00:08:20.646 21:09:14 -- accel/accel.sh@19 -- # read -r var val 00:08:20.646 21:09:14 -- accel/accel.sh@20 -- # val=32 00:08:20.646 21:09:14 -- accel/accel.sh@21 -- # case "$var" in 00:08:20.646 21:09:14 -- accel/accel.sh@19 -- # IFS=: 00:08:20.646 21:09:14 -- accel/accel.sh@19 -- # read -r var val 00:08:20.646 21:09:14 -- accel/accel.sh@20 -- # val=1 00:08:20.646 21:09:14 -- accel/accel.sh@21 -- # case "$var" in 00:08:20.646 21:09:14 -- accel/accel.sh@19 -- # IFS=: 00:08:20.646 21:09:14 -- accel/accel.sh@19 -- # read -r var val 00:08:20.646 21:09:14 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:20.646 21:09:14 -- accel/accel.sh@21 -- # case "$var" in 00:08:20.646 21:09:14 -- accel/accel.sh@19 -- # IFS=: 00:08:20.646 21:09:14 -- accel/accel.sh@19 -- # read -r var val 00:08:20.646 21:09:14 -- accel/accel.sh@20 -- # val=No 00:08:20.646 21:09:14 -- accel/accel.sh@21 -- # case "$var" in 00:08:20.646 21:09:14 -- accel/accel.sh@19 -- # IFS=: 00:08:20.646 21:09:14 -- accel/accel.sh@19 -- # read -r var val 00:08:20.646 21:09:14 -- accel/accel.sh@20 -- # val= 00:08:20.646 21:09:14 -- accel/accel.sh@21 -- # case "$var" in 00:08:20.646 21:09:14 -- accel/accel.sh@19 -- # IFS=: 00:08:20.646 21:09:14 -- accel/accel.sh@19 -- # read -r var val 00:08:20.646 21:09:14 -- accel/accel.sh@20 -- # val= 00:08:20.646 21:09:14 -- accel/accel.sh@21 -- # case "$var" in 00:08:20.646 21:09:14 -- accel/accel.sh@19 -- # IFS=: 00:08:20.646 21:09:14 -- accel/accel.sh@19 -- # read -r var val 00:08:23.949 21:09:17 -- accel/accel.sh@20 -- # val= 00:08:23.949 21:09:17 -- accel/accel.sh@21 -- # case "$var" in 00:08:23.949 21:09:17 -- accel/accel.sh@19 -- # IFS=: 00:08:23.949 21:09:17 -- accel/accel.sh@19 -- # read -r var val 00:08:23.949 21:09:17 -- accel/accel.sh@20 -- # val= 00:08:23.949 21:09:17 -- accel/accel.sh@21 -- # case "$var" in 00:08:23.949 21:09:17 -- accel/accel.sh@19 -- # IFS=: 00:08:23.949 21:09:17 -- accel/accel.sh@19 -- # read -r var val 00:08:23.949 21:09:17 -- accel/accel.sh@20 -- # val= 00:08:23.949 21:09:17 -- accel/accel.sh@21 -- # case "$var" in 00:08:23.949 21:09:17 -- accel/accel.sh@19 -- # IFS=: 00:08:23.949 21:09:17 -- accel/accel.sh@19 -- # read -r var val 00:08:23.949 21:09:17 -- accel/accel.sh@20 -- # val= 00:08:23.949 21:09:17 -- accel/accel.sh@21 -- # case "$var" in 00:08:23.949 21:09:17 -- accel/accel.sh@19 -- # IFS=: 00:08:23.949 21:09:17 -- accel/accel.sh@19 -- # read -r var val 00:08:23.949 21:09:17 -- accel/accel.sh@20 -- # val= 00:08:23.949 21:09:17 -- accel/accel.sh@21 -- # case "$var" in 00:08:23.949 21:09:17 -- accel/accel.sh@19 -- # IFS=: 00:08:23.949 21:09:17 -- accel/accel.sh@19 -- # read -r var val 00:08:23.949 21:09:17 -- accel/accel.sh@20 -- # val= 00:08:23.949 21:09:17 -- accel/accel.sh@21 -- # case "$var" in 00:08:23.949 21:09:17 -- accel/accel.sh@19 -- # IFS=: 00:08:23.949 21:09:17 -- accel/accel.sh@19 -- # read -r var val 00:08:23.949 21:09:17 -- accel/accel.sh@27 -- # [[ -n iaa ]] 00:08:23.949 21:09:17 -- accel/accel.sh@27 -- # [[ -n compress ]] 00:08:23.949 21:09:17 -- accel/accel.sh@27 -- # [[ iaa == \i\a\a ]] 00:08:23.949 00:08:23.949 real 0m9.676s 00:08:23.949 user 0m3.277s 00:08:23.949 sys 0m0.227s 00:08:23.949 21:09:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:23.949 21:09:17 -- common/autotest_common.sh@10 -- # set +x 00:08:23.949 
************************************ 00:08:23.949 END TEST accel_comp 00:08:23.949 ************************************ 00:08:23.949 21:09:17 -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:08:23.949 21:09:17 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:08:23.949 21:09:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:23.949 21:09:17 -- common/autotest_common.sh@10 -- # set +x 00:08:23.949 ************************************ 00:08:23.949 START TEST accel_decomp 00:08:23.949 ************************************ 00:08:23.949 21:09:17 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:08:23.949 21:09:17 -- accel/accel.sh@16 -- # local accel_opc 00:08:23.949 21:09:17 -- accel/accel.sh@17 -- # local accel_module 00:08:23.949 21:09:17 -- accel/accel.sh@19 -- # IFS=: 00:08:23.949 21:09:17 -- accel/accel.sh@19 -- # read -r var val 00:08:23.949 21:09:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:08:23.949 21:09:17 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:08:23.949 21:09:17 -- accel/accel.sh@12 -- # build_accel_config 00:08:23.949 21:09:17 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:23.949 21:09:17 -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:08:23.949 21:09:17 -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:08:23.949 21:09:17 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:08:23.949 21:09:17 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:08:23.949 21:09:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:23.949 21:09:17 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:23.949 21:09:17 -- accel/accel.sh@40 -- # local IFS=, 00:08:23.949 21:09:17 -- accel/accel.sh@41 -- # jq -r . 00:08:23.949 [2024-04-23 21:09:17.662454] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 
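The accel.sh@12 records show the exact binary and switches under test: build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l .../test/accel/bib -y. A hand-run equivalent could use process substitution for the config fd; the empty JSON body below is a placeholder assumption, since the real config carries the dsa/iaa scan methods built above:

```bash
# Manual re-run of the decompress case recorded above; <(...) yields a
# /dev/fd/NN path, mirroring the -c /dev/fd/62 in the log. The config
# content here is a placeholder assumption.
SPDK=/var/jenkins/workspace/dsa-phy-autotest/spdk
"$SPDK/build/examples/accel_perf" \
    -c <(printf '%s' '{"subsystems": []}') \
    -t 1 -w decompress \
    -l "$SPDK/test/accel/bib" -y
```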
00:08:23.949 [2024-04-23 21:09:17.662522] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1268475 ] 00:08:23.949 EAL: No free 2048 kB hugepages reported on node 1 00:08:23.949 [2024-04-23 21:09:17.753235] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.949 [2024-04-23 21:09:17.846458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.949 [2024-04-23 21:09:17.850958] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:08:23.949 [2024-04-23 21:09:17.858921] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:08:30.542 21:09:24 -- accel/accel.sh@20 -- # val= 00:08:30.542 21:09:24 -- accel/accel.sh@21 -- # case "$var" in 00:08:30.542 21:09:24 -- accel/accel.sh@19 -- # IFS=: 00:08:30.542 21:09:24 -- accel/accel.sh@19 -- # read -r var val 00:08:30.542 21:09:24 -- accel/accel.sh@20 -- # val= 00:08:30.542 21:09:24 -- accel/accel.sh@21 -- # case "$var" in 00:08:30.542 21:09:24 -- accel/accel.sh@19 -- # IFS=: 00:08:30.542 21:09:24 -- accel/accel.sh@19 -- # read -r var val 00:08:30.542 21:09:24 -- accel/accel.sh@20 -- # val= 00:08:30.542 21:09:24 -- accel/accel.sh@21 -- # case "$var" in 00:08:30.542 21:09:24 -- accel/accel.sh@19 -- # IFS=: 00:08:30.542 21:09:24 -- accel/accel.sh@19 -- # read -r var val 00:08:30.542 21:09:24 -- accel/accel.sh@20 -- # val=0x1 00:08:30.542 21:09:24 -- accel/accel.sh@21 -- # case "$var" in 00:08:30.542 21:09:24 -- accel/accel.sh@19 -- # IFS=: 00:08:30.542 21:09:24 -- accel/accel.sh@19 -- # read -r var val 00:08:30.542 21:09:24 -- accel/accel.sh@20 -- # val= 00:08:30.542 21:09:24 -- accel/accel.sh@21 -- # case "$var" in 00:08:30.542 21:09:24 -- accel/accel.sh@19 -- # IFS=: 00:08:30.542 21:09:24 -- accel/accel.sh@19 -- # read -r var val 00:08:30.542 21:09:24 -- accel/accel.sh@20 -- # val= 00:08:30.542 21:09:24 -- accel/accel.sh@21 -- # case "$var" in 00:08:30.542 21:09:24 -- accel/accel.sh@19 -- # IFS=: 00:08:30.542 21:09:24 -- accel/accel.sh@19 -- # read -r var val 00:08:30.542 21:09:24 -- accel/accel.sh@20 -- # val=decompress 00:08:30.542 21:09:24 -- accel/accel.sh@21 -- # case "$var" in 00:08:30.542 21:09:24 -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:30.542 21:09:24 -- accel/accel.sh@19 -- # IFS=: 00:08:30.542 21:09:24 -- accel/accel.sh@19 -- # read -r var val 00:08:30.542 21:09:24 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:30.542 21:09:24 -- accel/accel.sh@21 -- # case "$var" in 00:08:30.542 21:09:24 -- accel/accel.sh@19 -- # IFS=: 00:08:30.542 21:09:24 -- accel/accel.sh@19 -- # read -r var val 00:08:30.542 21:09:24 -- accel/accel.sh@20 -- # val= 00:08:30.542 21:09:24 -- accel/accel.sh@21 -- # case "$var" in 00:08:30.542 21:09:24 -- accel/accel.sh@19 -- # IFS=: 00:08:30.542 21:09:24 -- accel/accel.sh@19 -- # read -r var val 00:08:30.542 21:09:24 -- accel/accel.sh@20 -- # val=iaa 00:08:30.542 21:09:24 -- accel/accel.sh@21 -- # case "$var" in 00:08:30.542 21:09:24 -- accel/accel.sh@22 -- # accel_module=iaa 00:08:30.542 21:09:24 -- accel/accel.sh@19 -- # IFS=: 00:08:30.542 21:09:24 -- accel/accel.sh@19 -- # read -r var val 00:08:30.542 21:09:24 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:08:30.542 21:09:24 -- accel/accel.sh@21 -- # case "$var" in 00:08:30.542 21:09:24 -- accel/accel.sh@19 -- # IFS=: 00:08:30.542 21:09:24 -- 
accel/accel.sh@19 -- # read -r var val 00:08:30.542 21:09:24 -- accel/accel.sh@20 -- # val=32 00:08:30.542 21:09:24 -- accel/accel.sh@21 -- # case "$var" in 00:08:30.542 21:09:24 -- accel/accel.sh@19 -- # IFS=: 00:08:30.542 21:09:24 -- accel/accel.sh@19 -- # read -r var val 00:08:30.542 21:09:24 -- accel/accel.sh@20 -- # val=32 00:08:30.542 21:09:24 -- accel/accel.sh@21 -- # case "$var" in 00:08:30.542 21:09:24 -- accel/accel.sh@19 -- # IFS=: 00:08:30.542 21:09:24 -- accel/accel.sh@19 -- # read -r var val 00:08:30.542 21:09:24 -- accel/accel.sh@20 -- # val=1 00:08:30.542 21:09:24 -- accel/accel.sh@21 -- # case "$var" in 00:08:30.542 21:09:24 -- accel/accel.sh@19 -- # IFS=: 00:08:30.542 21:09:24 -- accel/accel.sh@19 -- # read -r var val 00:08:30.542 21:09:24 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:30.542 21:09:24 -- accel/accel.sh@21 -- # case "$var" in 00:08:30.542 21:09:24 -- accel/accel.sh@19 -- # IFS=: 00:08:30.542 21:09:24 -- accel/accel.sh@19 -- # read -r var val 00:08:30.542 21:09:24 -- accel/accel.sh@20 -- # val=Yes 00:08:30.542 21:09:24 -- accel/accel.sh@21 -- # case "$var" in 00:08:30.542 21:09:24 -- accel/accel.sh@19 -- # IFS=: 00:08:30.542 21:09:24 -- accel/accel.sh@19 -- # read -r var val 00:08:30.542 21:09:24 -- accel/accel.sh@20 -- # val= 00:08:30.542 21:09:24 -- accel/accel.sh@21 -- # case "$var" in 00:08:30.542 21:09:24 -- accel/accel.sh@19 -- # IFS=: 00:08:30.542 21:09:24 -- accel/accel.sh@19 -- # read -r var val 00:08:30.542 21:09:24 -- accel/accel.sh@20 -- # val= 00:08:30.542 21:09:24 -- accel/accel.sh@21 -- # case "$var" in 00:08:30.542 21:09:24 -- accel/accel.sh@19 -- # IFS=: 00:08:30.542 21:09:24 -- accel/accel.sh@19 -- # read -r var val 00:08:33.090 21:09:27 -- accel/accel.sh@20 -- # val= 00:08:33.090 21:09:27 -- accel/accel.sh@21 -- # case "$var" in 00:08:33.090 21:09:27 -- accel/accel.sh@19 -- # IFS=: 00:08:33.090 21:09:27 -- accel/accel.sh@19 -- # read -r var val 00:08:33.090 21:09:27 -- accel/accel.sh@20 -- # val= 00:08:33.090 21:09:27 -- accel/accel.sh@21 -- # case "$var" in 00:08:33.090 21:09:27 -- accel/accel.sh@19 -- # IFS=: 00:08:33.090 21:09:27 -- accel/accel.sh@19 -- # read -r var val 00:08:33.090 21:09:27 -- accel/accel.sh@20 -- # val= 00:08:33.090 21:09:27 -- accel/accel.sh@21 -- # case "$var" in 00:08:33.090 21:09:27 -- accel/accel.sh@19 -- # IFS=: 00:08:33.090 21:09:27 -- accel/accel.sh@19 -- # read -r var val 00:08:33.090 21:09:27 -- accel/accel.sh@20 -- # val= 00:08:33.090 21:09:27 -- accel/accel.sh@21 -- # case "$var" in 00:08:33.090 21:09:27 -- accel/accel.sh@19 -- # IFS=: 00:08:33.090 21:09:27 -- accel/accel.sh@19 -- # read -r var val 00:08:33.090 21:09:27 -- accel/accel.sh@20 -- # val= 00:08:33.090 21:09:27 -- accel/accel.sh@21 -- # case "$var" in 00:08:33.090 21:09:27 -- accel/accel.sh@19 -- # IFS=: 00:08:33.090 21:09:27 -- accel/accel.sh@19 -- # read -r var val 00:08:33.090 21:09:27 -- accel/accel.sh@20 -- # val= 00:08:33.090 21:09:27 -- accel/accel.sh@21 -- # case "$var" in 00:08:33.090 21:09:27 -- accel/accel.sh@19 -- # IFS=: 00:08:33.090 21:09:27 -- accel/accel.sh@19 -- # read -r var val 00:08:33.090 21:09:27 -- accel/accel.sh@27 -- # [[ -n iaa ]] 00:08:33.090 21:09:27 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:33.090 21:09:27 -- accel/accel.sh@27 -- # [[ iaa == \i\a\a ]] 00:08:33.090 00:08:33.090 real 0m9.624s 00:08:33.090 user 0m3.252s 00:08:33.090 sys 0m0.209s 00:08:33.090 21:09:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:33.090 21:09:27 -- common/autotest_common.sh@10 -- # set +x 00:08:33.090 
************************************ 00:08:33.090 END TEST accel_decomp 00:08:33.090 ************************************ 00:08:33.090 21:09:27 -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:33.090 21:09:27 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:08:33.090 21:09:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:33.090 21:09:27 -- common/autotest_common.sh@10 -- # set +x 00:08:33.350 ************************************ 00:08:33.350 START TEST accel_decmop_full 00:08:33.350 ************************************ 00:08:33.350 21:09:27 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:33.350 21:09:27 -- accel/accel.sh@16 -- # local accel_opc 00:08:33.350 21:09:27 -- accel/accel.sh@17 -- # local accel_module 00:08:33.350 21:09:27 -- accel/accel.sh@19 -- # IFS=: 00:08:33.350 21:09:27 -- accel/accel.sh@19 -- # read -r var val 00:08:33.350 21:09:27 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:33.351 21:09:27 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:33.351 21:09:27 -- accel/accel.sh@12 -- # build_accel_config 00:08:33.351 21:09:27 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:33.351 21:09:27 -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:08:33.351 21:09:27 -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:08:33.351 21:09:27 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:08:33.351 21:09:27 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:08:33.351 21:09:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:33.351 21:09:27 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:33.351 21:09:27 -- accel/accel.sh@40 -- # local IFS=, 00:08:33.351 21:09:27 -- accel/accel.sh@41 -- # jq -r . 00:08:33.351 [2024-04-23 21:09:27.400561] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 
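Every run in this sequence lands in the same roughly 9.6-9.7 s wall-clock band: the -t 1 one-second measurement window plus DPDK/EAL startup, module scans, and teardown. The timing triples can be pulled out of a saved copy of this console output; build.log below is a placeholder path:

```bash
# Extract the per-test real/user/sys triples from a saved build log.
grep -Eo '(real|user|sys)[[:space:]]+[0-9]+m[0-9.]+s' build.log
```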
00:08:33.351 [2024-04-23 21:09:27.400664] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1270402 ] 00:08:33.351 EAL: No free 2048 kB hugepages reported on node 1 00:08:33.351 [2024-04-23 21:09:27.511217] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.351 [2024-04-23 21:09:27.607666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.351 [2024-04-23 21:09:27.612130] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:08:33.351 [2024-04-23 21:09:27.620100] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:08:39.927 21:09:34 -- accel/accel.sh@20 -- # val= 00:08:39.927 21:09:34 -- accel/accel.sh@21 -- # case "$var" in 00:08:39.927 21:09:34 -- accel/accel.sh@19 -- # IFS=: 00:08:39.927 21:09:34 -- accel/accel.sh@19 -- # read -r var val 00:08:39.927 21:09:34 -- accel/accel.sh@20 -- # val= 00:08:39.927 21:09:34 -- accel/accel.sh@21 -- # case "$var" in 00:08:39.927 21:09:34 -- accel/accel.sh@19 -- # IFS=: 00:08:39.927 21:09:34 -- accel/accel.sh@19 -- # read -r var val 00:08:39.927 21:09:34 -- accel/accel.sh@20 -- # val= 00:08:39.927 21:09:34 -- accel/accel.sh@21 -- # case "$var" in 00:08:39.927 21:09:34 -- accel/accel.sh@19 -- # IFS=: 00:08:39.927 21:09:34 -- accel/accel.sh@19 -- # read -r var val 00:08:39.927 21:09:34 -- accel/accel.sh@20 -- # val=0x1 00:08:39.927 21:09:34 -- accel/accel.sh@21 -- # case "$var" in 00:08:39.927 21:09:34 -- accel/accel.sh@19 -- # IFS=: 00:08:39.927 21:09:34 -- accel/accel.sh@19 -- # read -r var val 00:08:39.927 21:09:34 -- accel/accel.sh@20 -- # val= 00:08:39.927 21:09:34 -- accel/accel.sh@21 -- # case "$var" in 00:08:39.927 21:09:34 -- accel/accel.sh@19 -- # IFS=: 00:08:39.927 21:09:34 -- accel/accel.sh@19 -- # read -r var val 00:08:39.927 21:09:34 -- accel/accel.sh@20 -- # val= 00:08:39.927 21:09:34 -- accel/accel.sh@21 -- # case "$var" in 00:08:39.927 21:09:34 -- accel/accel.sh@19 -- # IFS=: 00:08:39.927 21:09:34 -- accel/accel.sh@19 -- # read -r var val 00:08:39.927 21:09:34 -- accel/accel.sh@20 -- # val=decompress 00:08:39.927 21:09:34 -- accel/accel.sh@21 -- # case "$var" in 00:08:39.927 21:09:34 -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:39.927 21:09:34 -- accel/accel.sh@19 -- # IFS=: 00:08:39.927 21:09:34 -- accel/accel.sh@19 -- # read -r var val 00:08:39.927 21:09:34 -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:39.927 21:09:34 -- accel/accel.sh@21 -- # case "$var" in 00:08:39.927 21:09:34 -- accel/accel.sh@19 -- # IFS=: 00:08:39.927 21:09:34 -- accel/accel.sh@19 -- # read -r var val 00:08:39.927 21:09:34 -- accel/accel.sh@20 -- # val= 00:08:39.927 21:09:34 -- accel/accel.sh@21 -- # case "$var" in 00:08:39.927 21:09:34 -- accel/accel.sh@19 -- # IFS=: 00:08:39.927 21:09:34 -- accel/accel.sh@19 -- # read -r var val 00:08:39.927 21:09:34 -- accel/accel.sh@20 -- # val=iaa 00:08:39.927 21:09:34 -- accel/accel.sh@21 -- # case "$var" in 00:08:39.927 21:09:34 -- accel/accel.sh@22 -- # accel_module=iaa 00:08:39.927 21:09:34 -- accel/accel.sh@19 -- # IFS=: 00:08:39.927 21:09:34 -- accel/accel.sh@19 -- # read -r var val 00:08:39.927 21:09:34 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:08:39.927 21:09:34 -- accel/accel.sh@21 -- # case "$var" in 00:08:39.927 21:09:34 -- accel/accel.sh@19 -- # IFS=: 00:08:39.927 21:09:34 -- 
accel/accel.sh@19 -- # read -r var val 00:08:39.927 21:09:34 -- accel/accel.sh@20 -- # val=32 00:08:39.927 21:09:34 -- accel/accel.sh@21 -- # case "$var" in 00:08:39.927 21:09:34 -- accel/accel.sh@19 -- # IFS=: 00:08:39.927 21:09:34 -- accel/accel.sh@19 -- # read -r var val 00:08:39.927 21:09:34 -- accel/accel.sh@20 -- # val=32 00:08:39.927 21:09:34 -- accel/accel.sh@21 -- # case "$var" in 00:08:39.927 21:09:34 -- accel/accel.sh@19 -- # IFS=: 00:08:39.927 21:09:34 -- accel/accel.sh@19 -- # read -r var val 00:08:39.927 21:09:34 -- accel/accel.sh@20 -- # val=1 00:08:39.927 21:09:34 -- accel/accel.sh@21 -- # case "$var" in 00:08:39.927 21:09:34 -- accel/accel.sh@19 -- # IFS=: 00:08:39.927 21:09:34 -- accel/accel.sh@19 -- # read -r var val 00:08:39.927 21:09:34 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:39.927 21:09:34 -- accel/accel.sh@21 -- # case "$var" in 00:08:39.927 21:09:34 -- accel/accel.sh@19 -- # IFS=: 00:08:39.927 21:09:34 -- accel/accel.sh@19 -- # read -r var val 00:08:39.927 21:09:34 -- accel/accel.sh@20 -- # val=Yes 00:08:39.927 21:09:34 -- accel/accel.sh@21 -- # case "$var" in 00:08:39.927 21:09:34 -- accel/accel.sh@19 -- # IFS=: 00:08:39.927 21:09:34 -- accel/accel.sh@19 -- # read -r var val 00:08:39.927 21:09:34 -- accel/accel.sh@20 -- # val= 00:08:39.927 21:09:34 -- accel/accel.sh@21 -- # case "$var" in 00:08:39.927 21:09:34 -- accel/accel.sh@19 -- # IFS=: 00:08:39.927 21:09:34 -- accel/accel.sh@19 -- # read -r var val 00:08:39.927 21:09:34 -- accel/accel.sh@20 -- # val= 00:08:39.927 21:09:34 -- accel/accel.sh@21 -- # case "$var" in 00:08:39.927 21:09:34 -- accel/accel.sh@19 -- # IFS=: 00:08:39.927 21:09:34 -- accel/accel.sh@19 -- # read -r var val 00:08:43.227 21:09:37 -- accel/accel.sh@20 -- # val= 00:08:43.227 21:09:37 -- accel/accel.sh@21 -- # case "$var" in 00:08:43.227 21:09:37 -- accel/accel.sh@19 -- # IFS=: 00:08:43.227 21:09:37 -- accel/accel.sh@19 -- # read -r var val 00:08:43.227 21:09:37 -- accel/accel.sh@20 -- # val= 00:08:43.227 21:09:37 -- accel/accel.sh@21 -- # case "$var" in 00:08:43.227 21:09:37 -- accel/accel.sh@19 -- # IFS=: 00:08:43.227 21:09:37 -- accel/accel.sh@19 -- # read -r var val 00:08:43.227 21:09:37 -- accel/accel.sh@20 -- # val= 00:08:43.227 21:09:37 -- accel/accel.sh@21 -- # case "$var" in 00:08:43.227 21:09:37 -- accel/accel.sh@19 -- # IFS=: 00:08:43.227 21:09:37 -- accel/accel.sh@19 -- # read -r var val 00:08:43.227 21:09:37 -- accel/accel.sh@20 -- # val= 00:08:43.227 21:09:37 -- accel/accel.sh@21 -- # case "$var" in 00:08:43.227 21:09:37 -- accel/accel.sh@19 -- # IFS=: 00:08:43.227 21:09:37 -- accel/accel.sh@19 -- # read -r var val 00:08:43.227 21:09:37 -- accel/accel.sh@20 -- # val= 00:08:43.227 21:09:37 -- accel/accel.sh@21 -- # case "$var" in 00:08:43.227 21:09:37 -- accel/accel.sh@19 -- # IFS=: 00:08:43.227 21:09:37 -- accel/accel.sh@19 -- # read -r var val 00:08:43.227 21:09:37 -- accel/accel.sh@20 -- # val= 00:08:43.227 21:09:37 -- accel/accel.sh@21 -- # case "$var" in 00:08:43.227 21:09:37 -- accel/accel.sh@19 -- # IFS=: 00:08:43.227 21:09:37 -- accel/accel.sh@19 -- # read -r var val 00:08:43.227 21:09:37 -- accel/accel.sh@27 -- # [[ -n iaa ]] 00:08:43.227 21:09:37 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:43.227 21:09:37 -- accel/accel.sh@27 -- # [[ iaa == \i\a\a ]] 00:08:43.227 00:08:43.227 real 0m9.701s 00:08:43.227 user 0m3.315s 00:08:43.227 sys 0m0.222s 00:08:43.227 21:09:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:43.227 21:09:37 -- common/autotest_common.sh@10 -- # set +x 00:08:43.227 
************************************ 00:08:43.227 END TEST accel_decmop_full 00:08:43.227 ************************************ 00:08:43.227 21:09:37 -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:43.227 21:09:37 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:08:43.227 21:09:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:43.227 21:09:37 -- common/autotest_common.sh@10 -- # set +x 00:08:43.227 ************************************ 00:08:43.227 START TEST accel_decomp_mcore 00:08:43.227 ************************************ 00:08:43.227 21:09:37 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:43.227 21:09:37 -- accel/accel.sh@16 -- # local accel_opc 00:08:43.227 21:09:37 -- accel/accel.sh@17 -- # local accel_module 00:08:43.227 21:09:37 -- accel/accel.sh@19 -- # IFS=: 00:08:43.227 21:09:37 -- accel/accel.sh@19 -- # read -r var val 00:08:43.227 21:09:37 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:43.227 21:09:37 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:43.227 21:09:37 -- accel/accel.sh@12 -- # build_accel_config 00:08:43.227 21:09:37 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:43.227 21:09:37 -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:08:43.227 21:09:37 -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:08:43.227 21:09:37 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:08:43.227 21:09:37 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:08:43.227 21:09:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:43.227 21:09:37 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:43.227 21:09:37 -- accel/accel.sh@40 -- # local IFS=, 00:08:43.227 21:09:37 -- accel/accel.sh@41 -- # jq -r . 00:08:43.227 [2024-04-23 21:09:37.204490] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 
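accel_decomp_mcore is the first run in this section launched with -m 0xf rather than the default single-core mask, and the EAL notices that follow report four reactors (cores 0-3) instead of one. The core count is just the number of set bits in the mask:

```bash
# Count the cores selected by a core mask such as the -m 0xf above.
mask=0xf
count=0
for (( m = mask; m > 0; m >>= 1 )); do
    (( count += m & 1 ))
done
echo "$mask selects $count cores"    # 0xf -> 4, matching the log
```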
00:08:43.227 [2024-04-23 21:09:37.204554] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1272471 ] 00:08:43.227 EAL: No free 2048 kB hugepages reported on node 1 00:08:43.227 [2024-04-23 21:09:37.291300] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:43.227 [2024-04-23 21:09:37.390027] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:43.227 [2024-04-23 21:09:37.390127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:43.227 [2024-04-23 21:09:37.390242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.227 [2024-04-23 21:09:37.390250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:43.227 [2024-04-23 21:09:37.394816] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:08:43.227 [2024-04-23 21:09:37.402779] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:08:49.815 21:09:43 -- accel/accel.sh@20 -- # val= 00:08:49.815 21:09:43 -- accel/accel.sh@21 -- # case "$var" in 00:08:49.815 21:09:43 -- accel/accel.sh@19 -- # IFS=: 00:08:49.815 21:09:43 -- accel/accel.sh@19 -- # read -r var val 00:08:49.815 21:09:43 -- accel/accel.sh@20 -- # val= 00:08:49.815 21:09:43 -- accel/accel.sh@21 -- # case "$var" in 00:08:49.815 21:09:43 -- accel/accel.sh@19 -- # IFS=: 00:08:49.815 21:09:43 -- accel/accel.sh@19 -- # read -r var val 00:08:49.815 21:09:43 -- accel/accel.sh@20 -- # val= 00:08:49.815 21:09:43 -- accel/accel.sh@21 -- # case "$var" in 00:08:49.815 21:09:43 -- accel/accel.sh@19 -- # IFS=: 00:08:49.815 21:09:43 -- accel/accel.sh@19 -- # read -r var val 00:08:49.815 21:09:43 -- accel/accel.sh@20 -- # val=0xf 00:08:49.815 21:09:43 -- accel/accel.sh@21 -- # case "$var" in 00:08:49.815 21:09:43 -- accel/accel.sh@19 -- # IFS=: 00:08:49.815 21:09:43 -- accel/accel.sh@19 -- # read -r var val 00:08:49.815 21:09:43 -- accel/accel.sh@20 -- # val= 00:08:49.815 21:09:43 -- accel/accel.sh@21 -- # case "$var" in 00:08:49.815 21:09:43 -- accel/accel.sh@19 -- # IFS=: 00:08:49.815 21:09:43 -- accel/accel.sh@19 -- # read -r var val 00:08:49.815 21:09:43 -- accel/accel.sh@20 -- # val= 00:08:49.815 21:09:43 -- accel/accel.sh@21 -- # case "$var" in 00:08:49.815 21:09:43 -- accel/accel.sh@19 -- # IFS=: 00:08:49.815 21:09:43 -- accel/accel.sh@19 -- # read -r var val 00:08:49.815 21:09:43 -- accel/accel.sh@20 -- # val=decompress 00:08:49.815 21:09:43 -- accel/accel.sh@21 -- # case "$var" in 00:08:49.815 21:09:43 -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:49.815 21:09:43 -- accel/accel.sh@19 -- # IFS=: 00:08:49.815 21:09:43 -- accel/accel.sh@19 -- # read -r var val 00:08:49.815 21:09:43 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:49.815 21:09:43 -- accel/accel.sh@21 -- # case "$var" in 00:08:49.815 21:09:43 -- accel/accel.sh@19 -- # IFS=: 00:08:49.815 21:09:43 -- accel/accel.sh@19 -- # read -r var val 00:08:49.815 21:09:43 -- accel/accel.sh@20 -- # val= 00:08:49.815 21:09:43 -- accel/accel.sh@21 -- # case "$var" in 00:08:49.815 21:09:43 -- accel/accel.sh@19 -- # IFS=: 00:08:49.815 21:09:43 -- accel/accel.sh@19 -- # read -r var val 00:08:49.815 21:09:43 -- accel/accel.sh@20 -- # val=iaa 00:08:49.815 21:09:43 -- accel/accel.sh@21 -- # case "$var" in 00:08:49.815 21:09:43 -- accel/accel.sh@22 -- # accel_module=iaa 00:08:49.815 21:09:43 -- accel/accel.sh@19 -- # IFS=: 
00:08:49.815 21:09:43 -- accel/accel.sh@19 -- # read -r var val 00:08:49.815 21:09:43 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:08:49.815 21:09:43 -- accel/accel.sh@21 -- # case "$var" in 00:08:49.815 21:09:43 -- accel/accel.sh@19 -- # IFS=: 00:08:49.815 21:09:43 -- accel/accel.sh@19 -- # read -r var val 00:08:49.815 21:09:43 -- accel/accel.sh@20 -- # val=32 00:08:49.815 21:09:43 -- accel/accel.sh@21 -- # case "$var" in 00:08:49.815 21:09:43 -- accel/accel.sh@19 -- # IFS=: 00:08:49.815 21:09:43 -- accel/accel.sh@19 -- # read -r var val 00:08:49.815 21:09:43 -- accel/accel.sh@20 -- # val=32 00:08:49.815 21:09:43 -- accel/accel.sh@21 -- # case "$var" in 00:08:49.815 21:09:43 -- accel/accel.sh@19 -- # IFS=: 00:08:49.815 21:09:43 -- accel/accel.sh@19 -- # read -r var val 00:08:49.815 21:09:43 -- accel/accel.sh@20 -- # val=1 00:08:49.815 21:09:43 -- accel/accel.sh@21 -- # case "$var" in 00:08:49.815 21:09:43 -- accel/accel.sh@19 -- # IFS=: 00:08:49.815 21:09:43 -- accel/accel.sh@19 -- # read -r var val 00:08:49.815 21:09:43 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:49.815 21:09:43 -- accel/accel.sh@21 -- # case "$var" in 00:08:49.815 21:09:43 -- accel/accel.sh@19 -- # IFS=: 00:08:49.815 21:09:43 -- accel/accel.sh@19 -- # read -r var val 00:08:49.815 21:09:43 -- accel/accel.sh@20 -- # val=Yes 00:08:49.815 21:09:43 -- accel/accel.sh@21 -- # case "$var" in 00:08:49.815 21:09:43 -- accel/accel.sh@19 -- # IFS=: 00:08:49.815 21:09:43 -- accel/accel.sh@19 -- # read -r var val 00:08:49.815 21:09:43 -- accel/accel.sh@20 -- # val= 00:08:49.815 21:09:43 -- accel/accel.sh@21 -- # case "$var" in 00:08:49.815 21:09:43 -- accel/accel.sh@19 -- # IFS=: 00:08:49.815 21:09:43 -- accel/accel.sh@19 -- # read -r var val 00:08:49.815 21:09:43 -- accel/accel.sh@20 -- # val= 00:08:49.815 21:09:43 -- accel/accel.sh@21 -- # case "$var" in 00:08:49.815 21:09:43 -- accel/accel.sh@19 -- # IFS=: 00:08:49.815 21:09:43 -- accel/accel.sh@19 -- # read -r var val 00:08:53.117 21:09:46 -- accel/accel.sh@20 -- # val= 00:08:53.117 21:09:46 -- accel/accel.sh@21 -- # case "$var" in 00:08:53.117 21:09:46 -- accel/accel.sh@19 -- # IFS=: 00:08:53.117 21:09:46 -- accel/accel.sh@19 -- # read -r var val 00:08:53.117 21:09:46 -- accel/accel.sh@20 -- # val= 00:08:53.117 21:09:46 -- accel/accel.sh@21 -- # case "$var" in 00:08:53.117 21:09:46 -- accel/accel.sh@19 -- # IFS=: 00:08:53.117 21:09:46 -- accel/accel.sh@19 -- # read -r var val 00:08:53.117 21:09:46 -- accel/accel.sh@20 -- # val= 00:08:53.117 21:09:46 -- accel/accel.sh@21 -- # case "$var" in 00:08:53.117 21:09:46 -- accel/accel.sh@19 -- # IFS=: 00:08:53.117 21:09:46 -- accel/accel.sh@19 -- # read -r var val 00:08:53.117 21:09:46 -- accel/accel.sh@20 -- # val= 00:08:53.117 21:09:46 -- accel/accel.sh@21 -- # case "$var" in 00:08:53.117 21:09:46 -- accel/accel.sh@19 -- # IFS=: 00:08:53.117 21:09:46 -- accel/accel.sh@19 -- # read -r var val 00:08:53.117 21:09:46 -- accel/accel.sh@20 -- # val= 00:08:53.117 21:09:46 -- accel/accel.sh@21 -- # case "$var" in 00:08:53.117 21:09:46 -- accel/accel.sh@19 -- # IFS=: 00:08:53.117 21:09:46 -- accel/accel.sh@19 -- # read -r var val 00:08:53.117 21:09:46 -- accel/accel.sh@20 -- # val= 00:08:53.117 21:09:46 -- accel/accel.sh@21 -- # case "$var" in 00:08:53.117 21:09:46 -- accel/accel.sh@19 -- # IFS=: 00:08:53.117 21:09:46 -- accel/accel.sh@19 -- # read -r var val 00:08:53.118 21:09:46 -- accel/accel.sh@20 -- # val= 00:08:53.118 21:09:46 -- accel/accel.sh@21 -- # case "$var" in 00:08:53.118 
21:09:46 -- accel/accel.sh@19 -- # IFS=: 00:08:53.118 21:09:46 -- accel/accel.sh@19 -- # read -r var val 00:08:53.118 21:09:46 -- accel/accel.sh@20 -- # val= 00:08:53.118 21:09:46 -- accel/accel.sh@21 -- # case "$var" in 00:08:53.118 21:09:46 -- accel/accel.sh@19 -- # IFS=: 00:08:53.118 21:09:46 -- accel/accel.sh@19 -- # read -r var val 00:08:53.118 21:09:46 -- accel/accel.sh@20 -- # val= 00:08:53.118 21:09:46 -- accel/accel.sh@21 -- # case "$var" in 00:08:53.118 21:09:46 -- accel/accel.sh@19 -- # IFS=: 00:08:53.118 21:09:46 -- accel/accel.sh@19 -- # read -r var val 00:08:53.118 21:09:46 -- accel/accel.sh@27 -- # [[ -n iaa ]] 00:08:53.118 21:09:46 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:53.118 21:09:46 -- accel/accel.sh@27 -- # [[ iaa == \i\a\a ]] 00:08:53.118 00:08:53.118 real 0m9.677s 00:08:53.118 user 0m31.088s 00:08:53.118 sys 0m0.218s 00:08:53.118 21:09:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:53.118 21:09:46 -- common/autotest_common.sh@10 -- # set +x 00:08:53.118 ************************************ 00:08:53.118 END TEST accel_decomp_mcore 00:08:53.118 ************************************ 00:08:53.118 21:09:46 -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:53.118 21:09:46 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:08:53.118 21:09:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:53.118 21:09:46 -- common/autotest_common.sh@10 -- # set +x 00:08:53.118 ************************************ 00:08:53.118 START TEST accel_decomp_full_mcore 00:08:53.118 ************************************ 00:08:53.118 21:09:46 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:53.118 21:09:46 -- accel/accel.sh@16 -- # local accel_opc 00:08:53.118 21:09:46 -- accel/accel.sh@17 -- # local accel_module 00:08:53.118 21:09:46 -- accel/accel.sh@19 -- # IFS=: 00:08:53.118 21:09:46 -- accel/accel.sh@19 -- # read -r var val 00:08:53.118 21:09:46 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:53.118 21:09:46 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:53.118 21:09:46 -- accel/accel.sh@12 -- # build_accel_config 00:08:53.118 21:09:46 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:53.118 21:09:46 -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:08:53.118 21:09:46 -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:08:53.118 21:09:46 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:08:53.118 21:09:46 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:08:53.118 21:09:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:53.118 21:09:46 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:53.118 21:09:46 -- accel/accel.sh@40 -- # local IFS=, 00:08:53.118 21:09:46 -- accel/accel.sh@41 -- # jq -r . 00:08:53.118 [2024-04-23 21:09:47.000831] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 
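Note the shape of the accel_decomp_mcore triple a few records above: real 0m9.677s but user 0m31.088s. With four reactors polling in parallel, CPU time accumulates across cores, so user time exceeding wall time is expected; 31.088 / 9.677 comes to about 3.2 cores busy on average over the run:

```bash
# Average core utilisation implied by the accel_decomp_mcore triple above.
awk 'BEGIN { printf "avg cores busy: %.2f of 4\n", 31.088 / 9.677 }'
```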
00:08:53.118 [2024-04-23 21:09:47.000932] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1274286 ] 00:08:53.118 EAL: No free 2048 kB hugepages reported on node 1 00:08:53.118 [2024-04-23 21:09:47.116666] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:53.118 [2024-04-23 21:09:47.210657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:53.118 [2024-04-23 21:09:47.210747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:53.118 [2024-04-23 21:09:47.210869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.118 [2024-04-23 21:09:47.210878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:53.118 [2024-04-23 21:09:47.215405] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:08:53.118 [2024-04-23 21:09:47.223367] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:08:59.832 21:09:53 -- accel/accel.sh@20 -- # val= 00:08:59.832 21:09:53 -- accel/accel.sh@21 -- # case "$var" in 00:08:59.832 21:09:53 -- accel/accel.sh@19 -- # IFS=: 00:08:59.832 21:09:53 -- accel/accel.sh@19 -- # read -r var val 00:08:59.832 21:09:53 -- accel/accel.sh@20 -- # val= 00:08:59.832 21:09:53 -- accel/accel.sh@21 -- # case "$var" in 00:08:59.832 21:09:53 -- accel/accel.sh@19 -- # IFS=: 00:08:59.832 21:09:53 -- accel/accel.sh@19 -- # read -r var val 00:08:59.832 21:09:53 -- accel/accel.sh@20 -- # val= 00:08:59.832 21:09:53 -- accel/accel.sh@21 -- # case "$var" in 00:08:59.832 21:09:53 -- accel/accel.sh@19 -- # IFS=: 00:08:59.832 21:09:53 -- accel/accel.sh@19 -- # read -r var val 00:08:59.832 21:09:53 -- accel/accel.sh@20 -- # val=0xf 00:08:59.832 21:09:53 -- accel/accel.sh@21 -- # case "$var" in 00:08:59.832 21:09:53 -- accel/accel.sh@19 -- # IFS=: 00:08:59.832 21:09:53 -- accel/accel.sh@19 -- # read -r var val 00:08:59.832 21:09:53 -- accel/accel.sh@20 -- # val= 00:08:59.832 21:09:53 -- accel/accel.sh@21 -- # case "$var" in 00:08:59.832 21:09:53 -- accel/accel.sh@19 -- # IFS=: 00:08:59.832 21:09:53 -- accel/accel.sh@19 -- # read -r var val 00:08:59.832 21:09:53 -- accel/accel.sh@20 -- # val= 00:08:59.832 21:09:53 -- accel/accel.sh@21 -- # case "$var" in 00:08:59.832 21:09:53 -- accel/accel.sh@19 -- # IFS=: 00:08:59.832 21:09:53 -- accel/accel.sh@19 -- # read -r var val 00:08:59.832 21:09:53 -- accel/accel.sh@20 -- # val=decompress 00:08:59.832 21:09:53 -- accel/accel.sh@21 -- # case "$var" in 00:08:59.832 21:09:53 -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:59.832 21:09:53 -- accel/accel.sh@19 -- # IFS=: 00:08:59.832 21:09:53 -- accel/accel.sh@19 -- # read -r var val 00:08:59.832 21:09:53 -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:59.832 21:09:53 -- accel/accel.sh@21 -- # case "$var" in 00:08:59.832 21:09:53 -- accel/accel.sh@19 -- # IFS=: 00:08:59.832 21:09:53 -- accel/accel.sh@19 -- # read -r var val 00:08:59.832 21:09:53 -- accel/accel.sh@20 -- # val= 00:08:59.832 21:09:53 -- accel/accel.sh@21 -- # case "$var" in 00:08:59.832 21:09:53 -- accel/accel.sh@19 -- # IFS=: 00:08:59.832 21:09:53 -- accel/accel.sh@19 -- # read -r var val 00:08:59.832 21:09:53 -- accel/accel.sh@20 -- # val=iaa 00:08:59.832 21:09:53 -- accel/accel.sh@21 -- # case "$var" in 00:08:59.832 21:09:53 -- accel/accel.sh@22 -- # accel_module=iaa 00:08:59.832 21:09:53 -- accel/accel.sh@19 -- # IFS=: 
00:08:59.832 21:09:53 -- accel/accel.sh@19 -- # read -r var val 00:08:59.832 21:09:53 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:08:59.832 21:09:53 -- accel/accel.sh@21 -- # case "$var" in 00:08:59.832 21:09:53 -- accel/accel.sh@19 -- # IFS=: 00:08:59.832 21:09:53 -- accel/accel.sh@19 -- # read -r var val 00:08:59.832 21:09:53 -- accel/accel.sh@20 -- # val=32 00:08:59.832 21:09:53 -- accel/accel.sh@21 -- # case "$var" in 00:08:59.832 21:09:53 -- accel/accel.sh@19 -- # IFS=: 00:08:59.832 21:09:53 -- accel/accel.sh@19 -- # read -r var val 00:08:59.832 21:09:53 -- accel/accel.sh@20 -- # val=32 00:08:59.832 21:09:53 -- accel/accel.sh@21 -- # case "$var" in 00:08:59.832 21:09:53 -- accel/accel.sh@19 -- # IFS=: 00:08:59.832 21:09:53 -- accel/accel.sh@19 -- # read -r var val 00:08:59.832 21:09:53 -- accel/accel.sh@20 -- # val=1 00:08:59.832 21:09:53 -- accel/accel.sh@21 -- # case "$var" in 00:08:59.832 21:09:53 -- accel/accel.sh@19 -- # IFS=: 00:08:59.832 21:09:53 -- accel/accel.sh@19 -- # read -r var val 00:08:59.832 21:09:53 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:59.832 21:09:53 -- accel/accel.sh@21 -- # case "$var" in 00:08:59.832 21:09:53 -- accel/accel.sh@19 -- # IFS=: 00:08:59.832 21:09:53 -- accel/accel.sh@19 -- # read -r var val 00:08:59.832 21:09:53 -- accel/accel.sh@20 -- # val=Yes 00:08:59.832 21:09:53 -- accel/accel.sh@21 -- # case "$var" in 00:08:59.832 21:09:53 -- accel/accel.sh@19 -- # IFS=: 00:08:59.832 21:09:53 -- accel/accel.sh@19 -- # read -r var val 00:08:59.832 21:09:53 -- accel/accel.sh@20 -- # val= 00:08:59.832 21:09:53 -- accel/accel.sh@21 -- # case "$var" in 00:08:59.832 21:09:53 -- accel/accel.sh@19 -- # IFS=: 00:08:59.832 21:09:53 -- accel/accel.sh@19 -- # read -r var val 00:08:59.832 21:09:53 -- accel/accel.sh@20 -- # val= 00:08:59.832 21:09:53 -- accel/accel.sh@21 -- # case "$var" in 00:08:59.832 21:09:53 -- accel/accel.sh@19 -- # IFS=: 00:08:59.832 21:09:53 -- accel/accel.sh@19 -- # read -r var val 00:09:02.399 21:09:56 -- accel/accel.sh@20 -- # val= 00:09:02.399 21:09:56 -- accel/accel.sh@21 -- # case "$var" in 00:09:02.399 21:09:56 -- accel/accel.sh@19 -- # IFS=: 00:09:02.399 21:09:56 -- accel/accel.sh@19 -- # read -r var val 00:09:02.399 21:09:56 -- accel/accel.sh@20 -- # val= 00:09:02.399 21:09:56 -- accel/accel.sh@21 -- # case "$var" in 00:09:02.399 21:09:56 -- accel/accel.sh@19 -- # IFS=: 00:09:02.399 21:09:56 -- accel/accel.sh@19 -- # read -r var val 00:09:02.399 21:09:56 -- accel/accel.sh@20 -- # val= 00:09:02.399 21:09:56 -- accel/accel.sh@21 -- # case "$var" in 00:09:02.399 21:09:56 -- accel/accel.sh@19 -- # IFS=: 00:09:02.399 21:09:56 -- accel/accel.sh@19 -- # read -r var val 00:09:02.399 21:09:56 -- accel/accel.sh@20 -- # val= 00:09:02.399 21:09:56 -- accel/accel.sh@21 -- # case "$var" in 00:09:02.399 21:09:56 -- accel/accel.sh@19 -- # IFS=: 00:09:02.399 21:09:56 -- accel/accel.sh@19 -- # read -r var val 00:09:02.399 21:09:56 -- accel/accel.sh@20 -- # val= 00:09:02.399 21:09:56 -- accel/accel.sh@21 -- # case "$var" in 00:09:02.399 21:09:56 -- accel/accel.sh@19 -- # IFS=: 00:09:02.399 21:09:56 -- accel/accel.sh@19 -- # read -r var val 00:09:02.399 21:09:56 -- accel/accel.sh@20 -- # val= 00:09:02.399 21:09:56 -- accel/accel.sh@21 -- # case "$var" in 00:09:02.399 21:09:56 -- accel/accel.sh@19 -- # IFS=: 00:09:02.399 21:09:56 -- accel/accel.sh@19 -- # read -r var val 00:09:02.399 21:09:56 -- accel/accel.sh@20 -- # val= 00:09:02.399 21:09:56 -- accel/accel.sh@21 -- # case "$var" in 00:09:02.399 
21:09:56 -- accel/accel.sh@19 -- # IFS=: 00:09:02.399 21:09:56 -- accel/accel.sh@19 -- # read -r var val 00:09:02.399 21:09:56 -- accel/accel.sh@20 -- # val= 00:09:02.399 21:09:56 -- accel/accel.sh@21 -- # case "$var" in 00:09:02.399 21:09:56 -- accel/accel.sh@19 -- # IFS=: 00:09:02.399 21:09:56 -- accel/accel.sh@19 -- # read -r var val 00:09:02.399 21:09:56 -- accel/accel.sh@20 -- # val= 00:09:02.399 21:09:56 -- accel/accel.sh@21 -- # case "$var" in 00:09:02.399 21:09:56 -- accel/accel.sh@19 -- # IFS=: 00:09:02.399 21:09:56 -- accel/accel.sh@19 -- # read -r var val 00:09:02.668 21:09:56 -- accel/accel.sh@27 -- # [[ -n iaa ]] 00:09:02.668 21:09:56 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:09:02.668 21:09:56 -- accel/accel.sh@27 -- # [[ iaa == \i\a\a ]] 00:09:02.668 00:09:02.668 real 0m9.702s 00:09:02.668 user 0m31.087s 00:09:02.668 sys 0m0.232s 00:09:02.668 21:09:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:02.668 21:09:56 -- common/autotest_common.sh@10 -- # set +x 00:09:02.668 ************************************ 00:09:02.668 END TEST accel_decomp_full_mcore 00:09:02.668 ************************************ 00:09:02.668 21:09:56 -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -T 2 00:09:02.668 21:09:56 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:09:02.668 21:09:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:02.668 21:09:56 -- common/autotest_common.sh@10 -- # set +x 00:09:02.668 ************************************ 00:09:02.668 START TEST accel_decomp_mthread 00:09:02.668 ************************************ 00:09:02.668 21:09:56 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -T 2 00:09:02.668 21:09:56 -- accel/accel.sh@16 -- # local accel_opc 00:09:02.668 21:09:56 -- accel/accel.sh@17 -- # local accel_module 00:09:02.668 21:09:56 -- accel/accel.sh@19 -- # IFS=: 00:09:02.668 21:09:56 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -T 2 00:09:02.668 21:09:56 -- accel/accel.sh@19 -- # read -r var val 00:09:02.668 21:09:56 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -T 2 00:09:02.668 21:09:56 -- accel/accel.sh@12 -- # build_accel_config 00:09:02.668 21:09:56 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:02.668 21:09:56 -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:09:02.668 21:09:56 -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:09:02.668 21:09:56 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:09:02.668 21:09:56 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:09:02.668 21:09:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:02.668 21:09:56 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:02.668 21:09:56 -- accel/accel.sh@40 -- # local IFS=, 00:09:02.668 21:09:56 -- accel/accel.sh@41 -- # jq -r . 00:09:02.668 [2024-04-23 21:09:56.839926] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 
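Every accel_decomp_* case in this stretch runs through the same accel_test wrapper xtraced above: build_accel_config collects the dsa_scan_accel_module and iaa_scan_accel_module entries into a JSON config, and accel_perf reads that config from fd 62. A sketch of an equivalent direct invocation, using the accel_decomp_full_mcore command line recorded above (flags, paths, and method names are verbatim from this log; the "subsystems" wrapper object is an assumption based on SPDK's usual JSON config layout):

    SPDK=/var/jenkins/workspace/dsa-phy-autotest/spdk
    cat > /tmp/accel.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "accel",
          "config": [
            {"method": "dsa_scan_accel_module"},
            {"method": "iaa_scan_accel_module"}
          ]
        }
      ]
    }
    EOF
    # Command line copied verbatim from the run above.
    $SPDK/build/examples/accel_perf -c /tmp/accel.json \
        -t 1 -w decompress -l $SPDK/test/accel/bib -y -o 0 -m 0xf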
00:09:02.668 [2024-04-23 21:09:56.840038] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1276241 ] 00:09:02.668 EAL: No free 2048 kB hugepages reported on node 1 00:09:02.930 [2024-04-23 21:09:56.960973] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.930 [2024-04-23 21:09:57.052809] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.930 [2024-04-23 21:09:57.057297] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:09:02.930 [2024-04-23 21:09:57.065266] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:09:09.572 21:10:03 -- accel/accel.sh@20 -- # val= 00:09:09.572 21:10:03 -- accel/accel.sh@21 -- # case "$var" in 00:09:09.572 21:10:03 -- accel/accel.sh@19 -- # IFS=: 00:09:09.572 21:10:03 -- accel/accel.sh@19 -- # read -r var val 00:09:09.572 21:10:03 -- accel/accel.sh@20 -- # val= 00:09:09.572 21:10:03 -- accel/accel.sh@21 -- # case "$var" in 00:09:09.572 21:10:03 -- accel/accel.sh@19 -- # IFS=: 00:09:09.572 21:10:03 -- accel/accel.sh@19 -- # read -r var val 00:09:09.572 21:10:03 -- accel/accel.sh@20 -- # val= 00:09:09.572 21:10:03 -- accel/accel.sh@21 -- # case "$var" in 00:09:09.572 21:10:03 -- accel/accel.sh@19 -- # IFS=: 00:09:09.572 21:10:03 -- accel/accel.sh@19 -- # read -r var val 00:09:09.572 21:10:03 -- accel/accel.sh@20 -- # val=0x1 00:09:09.572 21:10:03 -- accel/accel.sh@21 -- # case "$var" in 00:09:09.572 21:10:03 -- accel/accel.sh@19 -- # IFS=: 00:09:09.572 21:10:03 -- accel/accel.sh@19 -- # read -r var val 00:09:09.572 21:10:03 -- accel/accel.sh@20 -- # val= 00:09:09.572 21:10:03 -- accel/accel.sh@21 -- # case "$var" in 00:09:09.572 21:10:03 -- accel/accel.sh@19 -- # IFS=: 00:09:09.572 21:10:03 -- accel/accel.sh@19 -- # read -r var val 00:09:09.572 21:10:03 -- accel/accel.sh@20 -- # val= 00:09:09.572 21:10:03 -- accel/accel.sh@21 -- # case "$var" in 00:09:09.572 21:10:03 -- accel/accel.sh@19 -- # IFS=: 00:09:09.572 21:10:03 -- accel/accel.sh@19 -- # read -r var val 00:09:09.572 21:10:03 -- accel/accel.sh@20 -- # val=decompress 00:09:09.572 21:10:03 -- accel/accel.sh@21 -- # case "$var" in 00:09:09.572 21:10:03 -- accel/accel.sh@23 -- # accel_opc=decompress 00:09:09.572 21:10:03 -- accel/accel.sh@19 -- # IFS=: 00:09:09.572 21:10:03 -- accel/accel.sh@19 -- # read -r var val 00:09:09.572 21:10:03 -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:09.572 21:10:03 -- accel/accel.sh@21 -- # case "$var" in 00:09:09.572 21:10:03 -- accel/accel.sh@19 -- # IFS=: 00:09:09.572 21:10:03 -- accel/accel.sh@19 -- # read -r var val 00:09:09.572 21:10:03 -- accel/accel.sh@20 -- # val= 00:09:09.572 21:10:03 -- accel/accel.sh@21 -- # case "$var" in 00:09:09.572 21:10:03 -- accel/accel.sh@19 -- # IFS=: 00:09:09.572 21:10:03 -- accel/accel.sh@19 -- # read -r var val 00:09:09.572 21:10:03 -- accel/accel.sh@20 -- # val=iaa 00:09:09.572 21:10:03 -- accel/accel.sh@21 -- # case "$var" in 00:09:09.572 21:10:03 -- accel/accel.sh@22 -- # accel_module=iaa 00:09:09.572 21:10:03 -- accel/accel.sh@19 -- # IFS=: 00:09:09.572 21:10:03 -- accel/accel.sh@19 -- # read -r var val 00:09:09.572 21:10:03 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:09:09.572 21:10:03 -- accel/accel.sh@21 -- # case "$var" in 00:09:09.572 21:10:03 -- accel/accel.sh@19 -- # IFS=: 00:09:09.572 21:10:03 -- 
accel/accel.sh@19 -- # read -r var val 00:09:09.572 21:10:03 -- accel/accel.sh@20 -- # val=32 00:09:09.572 21:10:03 -- accel/accel.sh@21 -- # case "$var" in 00:09:09.572 21:10:03 -- accel/accel.sh@19 -- # IFS=: 00:09:09.572 21:10:03 -- accel/accel.sh@19 -- # read -r var val 00:09:09.572 21:10:03 -- accel/accel.sh@20 -- # val=32 00:09:09.572 21:10:03 -- accel/accel.sh@21 -- # case "$var" in 00:09:09.572 21:10:03 -- accel/accel.sh@19 -- # IFS=: 00:09:09.572 21:10:03 -- accel/accel.sh@19 -- # read -r var val 00:09:09.572 21:10:03 -- accel/accel.sh@20 -- # val=2 00:09:09.572 21:10:03 -- accel/accel.sh@21 -- # case "$var" in 00:09:09.572 21:10:03 -- accel/accel.sh@19 -- # IFS=: 00:09:09.572 21:10:03 -- accel/accel.sh@19 -- # read -r var val 00:09:09.572 21:10:03 -- accel/accel.sh@20 -- # val='1 seconds' 00:09:09.572 21:10:03 -- accel/accel.sh@21 -- # case "$var" in 00:09:09.572 21:10:03 -- accel/accel.sh@19 -- # IFS=: 00:09:09.572 21:10:03 -- accel/accel.sh@19 -- # read -r var val 00:09:09.572 21:10:03 -- accel/accel.sh@20 -- # val=Yes 00:09:09.572 21:10:03 -- accel/accel.sh@21 -- # case "$var" in 00:09:09.572 21:10:03 -- accel/accel.sh@19 -- # IFS=: 00:09:09.572 21:10:03 -- accel/accel.sh@19 -- # read -r var val 00:09:09.572 21:10:03 -- accel/accel.sh@20 -- # val= 00:09:09.572 21:10:03 -- accel/accel.sh@21 -- # case "$var" in 00:09:09.572 21:10:03 -- accel/accel.sh@19 -- # IFS=: 00:09:09.573 21:10:03 -- accel/accel.sh@19 -- # read -r var val 00:09:09.573 21:10:03 -- accel/accel.sh@20 -- # val= 00:09:09.573 21:10:03 -- accel/accel.sh@21 -- # case "$var" in 00:09:09.573 21:10:03 -- accel/accel.sh@19 -- # IFS=: 00:09:09.573 21:10:03 -- accel/accel.sh@19 -- # read -r var val 00:09:12.874 21:10:06 -- accel/accel.sh@20 -- # val= 00:09:12.874 21:10:06 -- accel/accel.sh@21 -- # case "$var" in 00:09:12.874 21:10:06 -- accel/accel.sh@19 -- # IFS=: 00:09:12.874 21:10:06 -- accel/accel.sh@19 -- # read -r var val 00:09:12.874 21:10:06 -- accel/accel.sh@20 -- # val= 00:09:12.874 21:10:06 -- accel/accel.sh@21 -- # case "$var" in 00:09:12.874 21:10:06 -- accel/accel.sh@19 -- # IFS=: 00:09:12.874 21:10:06 -- accel/accel.sh@19 -- # read -r var val 00:09:12.874 21:10:06 -- accel/accel.sh@20 -- # val= 00:09:12.874 21:10:06 -- accel/accel.sh@21 -- # case "$var" in 00:09:12.874 21:10:06 -- accel/accel.sh@19 -- # IFS=: 00:09:12.874 21:10:06 -- accel/accel.sh@19 -- # read -r var val 00:09:12.874 21:10:06 -- accel/accel.sh@20 -- # val= 00:09:12.874 21:10:06 -- accel/accel.sh@21 -- # case "$var" in 00:09:12.874 21:10:06 -- accel/accel.sh@19 -- # IFS=: 00:09:12.874 21:10:06 -- accel/accel.sh@19 -- # read -r var val 00:09:12.874 21:10:06 -- accel/accel.sh@20 -- # val= 00:09:12.874 21:10:06 -- accel/accel.sh@21 -- # case "$var" in 00:09:12.874 21:10:06 -- accel/accel.sh@19 -- # IFS=: 00:09:12.874 21:10:06 -- accel/accel.sh@19 -- # read -r var val 00:09:12.874 21:10:06 -- accel/accel.sh@20 -- # val= 00:09:12.874 21:10:06 -- accel/accel.sh@21 -- # case "$var" in 00:09:12.874 21:10:06 -- accel/accel.sh@19 -- # IFS=: 00:09:12.874 21:10:06 -- accel/accel.sh@19 -- # read -r var val 00:09:12.874 21:10:06 -- accel/accel.sh@20 -- # val= 00:09:12.874 21:10:06 -- accel/accel.sh@21 -- # case "$var" in 00:09:12.874 21:10:06 -- accel/accel.sh@19 -- # IFS=: 00:09:12.874 21:10:06 -- accel/accel.sh@19 -- # read -r var val 00:09:12.874 21:10:06 -- accel/accel.sh@27 -- # [[ -n iaa ]] 00:09:12.874 21:10:06 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:09:12.874 21:10:06 -- accel/accel.sh@27 -- # [[ iaa == \i\a\a ]] 00:09:12.874 
00:09:12.874 real 0m9.674s 00:09:12.874 user 0m3.277s 00:09:12.874 sys 0m0.232s 00:09:12.875 21:10:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:12.875 21:10:06 -- common/autotest_common.sh@10 -- # set +x 00:09:12.875 ************************************ 00:09:12.875 END TEST accel_decomp_mthread 00:09:12.875 ************************************ 00:09:12.875 21:10:06 -- accel/accel.sh@122 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:09:12.875 21:10:06 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:09:12.875 21:10:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:12.875 21:10:06 -- common/autotest_common.sh@10 -- # set +x 00:09:12.875 ************************************ 00:09:12.875 START TEST accel_deomp_full_mthread 00:09:12.875 ************************************ 00:09:12.875 21:10:06 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:09:12.875 21:10:06 -- accel/accel.sh@16 -- # local accel_opc 00:09:12.875 21:10:06 -- accel/accel.sh@17 -- # local accel_module 00:09:12.875 21:10:06 -- accel/accel.sh@19 -- # IFS=: 00:09:12.875 21:10:06 -- accel/accel.sh@19 -- # read -r var val 00:09:12.875 21:10:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:09:12.875 21:10:06 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:09:12.875 21:10:06 -- accel/accel.sh@12 -- # build_accel_config 00:09:12.875 21:10:06 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:12.875 21:10:06 -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:09:12.875 21:10:06 -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:09:12.875 21:10:06 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:09:12.875 21:10:06 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:09:12.875 21:10:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:12.875 21:10:06 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:12.875 21:10:06 -- accel/accel.sh@40 -- # local IFS=, 00:09:12.875 21:10:06 -- accel/accel.sh@41 -- # jq -r . 00:09:12.875 [2024-04-23 21:10:06.652120] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 
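Only the accel_perf flags distinguish the decompress variants in this stretch; the input file, the one-second duration, and the iaa module stay fixed. Collecting the three command lines xtraced here side by side (the full_* variants echo '111250 bytes' instead of '4096 bytes' in the val= dump, i.e. they decompress the whole bib file rather than 4 KiB blocks, and -T 2 surfaces as val=2, a second worker thread on the single core):

    BIB=$SPDK/test/accel/bib
    accel_perf -t 1 -w decompress -l $BIB -y -o 0 -m 0xf  # accel_decomp_full_mcore: core mask 0xf, full input
    accel_perf -t 1 -w decompress -l $BIB -y -T 2         # accel_decomp_mthread: 4 KiB blocks, 2 threads
    accel_perf -t 1 -w decompress -l $BIB -y -o 0 -T 2    # accel_deomp_full_mthread: full input, 2 threads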
00:09:12.875 [2024-04-23 21:10:06.652249] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1278210 ] 00:09:12.875 EAL: No free 2048 kB hugepages reported on node 1 00:09:12.875 [2024-04-23 21:10:06.782118] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.875 [2024-04-23 21:10:06.877842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.875 [2024-04-23 21:10:06.882390] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:09:12.875 [2024-04-23 21:10:06.890343] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:09:19.464 21:10:13 -- accel/accel.sh@20 -- # val= 00:09:19.464 21:10:13 -- accel/accel.sh@21 -- # case "$var" in 00:09:19.464 21:10:13 -- accel/accel.sh@19 -- # IFS=: 00:09:19.464 21:10:13 -- accel/accel.sh@19 -- # read -r var val 00:09:19.464 21:10:13 -- accel/accel.sh@20 -- # val= 00:09:19.464 21:10:13 -- accel/accel.sh@21 -- # case "$var" in 00:09:19.464 21:10:13 -- accel/accel.sh@19 -- # IFS=: 00:09:19.464 21:10:13 -- accel/accel.sh@19 -- # read -r var val 00:09:19.464 21:10:13 -- accel/accel.sh@20 -- # val= 00:09:19.464 21:10:13 -- accel/accel.sh@21 -- # case "$var" in 00:09:19.464 21:10:13 -- accel/accel.sh@19 -- # IFS=: 00:09:19.464 21:10:13 -- accel/accel.sh@19 -- # read -r var val 00:09:19.464 21:10:13 -- accel/accel.sh@20 -- # val=0x1 00:09:19.464 21:10:13 -- accel/accel.sh@21 -- # case "$var" in 00:09:19.464 21:10:13 -- accel/accel.sh@19 -- # IFS=: 00:09:19.464 21:10:13 -- accel/accel.sh@19 -- # read -r var val 00:09:19.464 21:10:13 -- accel/accel.sh@20 -- # val= 00:09:19.464 21:10:13 -- accel/accel.sh@21 -- # case "$var" in 00:09:19.464 21:10:13 -- accel/accel.sh@19 -- # IFS=: 00:09:19.464 21:10:13 -- accel/accel.sh@19 -- # read -r var val 00:09:19.464 21:10:13 -- accel/accel.sh@20 -- # val= 00:09:19.464 21:10:13 -- accel/accel.sh@21 -- # case "$var" in 00:09:19.464 21:10:13 -- accel/accel.sh@19 -- # IFS=: 00:09:19.464 21:10:13 -- accel/accel.sh@19 -- # read -r var val 00:09:19.464 21:10:13 -- accel/accel.sh@20 -- # val=decompress 00:09:19.464 21:10:13 -- accel/accel.sh@21 -- # case "$var" in 00:09:19.464 21:10:13 -- accel/accel.sh@23 -- # accel_opc=decompress 00:09:19.464 21:10:13 -- accel/accel.sh@19 -- # IFS=: 00:09:19.464 21:10:13 -- accel/accel.sh@19 -- # read -r var val 00:09:19.464 21:10:13 -- accel/accel.sh@20 -- # val='111250 bytes' 00:09:19.464 21:10:13 -- accel/accel.sh@21 -- # case "$var" in 00:09:19.464 21:10:13 -- accel/accel.sh@19 -- # IFS=: 00:09:19.464 21:10:13 -- accel/accel.sh@19 -- # read -r var val 00:09:19.464 21:10:13 -- accel/accel.sh@20 -- # val= 00:09:19.464 21:10:13 -- accel/accel.sh@21 -- # case "$var" in 00:09:19.464 21:10:13 -- accel/accel.sh@19 -- # IFS=: 00:09:19.464 21:10:13 -- accel/accel.sh@19 -- # read -r var val 00:09:19.464 21:10:13 -- accel/accel.sh@20 -- # val=iaa 00:09:19.464 21:10:13 -- accel/accel.sh@21 -- # case "$var" in 00:09:19.464 21:10:13 -- accel/accel.sh@22 -- # accel_module=iaa 00:09:19.464 21:10:13 -- accel/accel.sh@19 -- # IFS=: 00:09:19.464 21:10:13 -- accel/accel.sh@19 -- # read -r var val 00:09:19.464 21:10:13 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:09:19.464 21:10:13 -- accel/accel.sh@21 -- # case "$var" in 00:09:19.464 21:10:13 -- accel/accel.sh@19 -- # IFS=: 00:09:19.464 21:10:13 -- 
accel/accel.sh@19 -- # read -r var val 00:09:19.464 21:10:13 -- accel/accel.sh@20 -- # val=32 00:09:19.464 21:10:13 -- accel/accel.sh@21 -- # case "$var" in 00:09:19.464 21:10:13 -- accel/accel.sh@19 -- # IFS=: 00:09:19.464 21:10:13 -- accel/accel.sh@19 -- # read -r var val 00:09:19.464 21:10:13 -- accel/accel.sh@20 -- # val=32 00:09:19.464 21:10:13 -- accel/accel.sh@21 -- # case "$var" in 00:09:19.464 21:10:13 -- accel/accel.sh@19 -- # IFS=: 00:09:19.464 21:10:13 -- accel/accel.sh@19 -- # read -r var val 00:09:19.464 21:10:13 -- accel/accel.sh@20 -- # val=2 00:09:19.464 21:10:13 -- accel/accel.sh@21 -- # case "$var" in 00:09:19.464 21:10:13 -- accel/accel.sh@19 -- # IFS=: 00:09:19.464 21:10:13 -- accel/accel.sh@19 -- # read -r var val 00:09:19.464 21:10:13 -- accel/accel.sh@20 -- # val='1 seconds' 00:09:19.464 21:10:13 -- accel/accel.sh@21 -- # case "$var" in 00:09:19.464 21:10:13 -- accel/accel.sh@19 -- # IFS=: 00:09:19.464 21:10:13 -- accel/accel.sh@19 -- # read -r var val 00:09:19.464 21:10:13 -- accel/accel.sh@20 -- # val=Yes 00:09:19.464 21:10:13 -- accel/accel.sh@21 -- # case "$var" in 00:09:19.464 21:10:13 -- accel/accel.sh@19 -- # IFS=: 00:09:19.464 21:10:13 -- accel/accel.sh@19 -- # read -r var val 00:09:19.464 21:10:13 -- accel/accel.sh@20 -- # val= 00:09:19.464 21:10:13 -- accel/accel.sh@21 -- # case "$var" in 00:09:19.464 21:10:13 -- accel/accel.sh@19 -- # IFS=: 00:09:19.464 21:10:13 -- accel/accel.sh@19 -- # read -r var val 00:09:19.464 21:10:13 -- accel/accel.sh@20 -- # val= 00:09:19.464 21:10:13 -- accel/accel.sh@21 -- # case "$var" in 00:09:19.464 21:10:13 -- accel/accel.sh@19 -- # IFS=: 00:09:19.464 21:10:13 -- accel/accel.sh@19 -- # read -r var val 00:09:22.759 21:10:16 -- accel/accel.sh@20 -- # val= 00:09:22.759 21:10:16 -- accel/accel.sh@21 -- # case "$var" in 00:09:22.759 21:10:16 -- accel/accel.sh@19 -- # IFS=: 00:09:22.759 21:10:16 -- accel/accel.sh@19 -- # read -r var val 00:09:22.759 21:10:16 -- accel/accel.sh@20 -- # val= 00:09:22.759 21:10:16 -- accel/accel.sh@21 -- # case "$var" in 00:09:22.759 21:10:16 -- accel/accel.sh@19 -- # IFS=: 00:09:22.759 21:10:16 -- accel/accel.sh@19 -- # read -r var val 00:09:22.759 21:10:16 -- accel/accel.sh@20 -- # val= 00:09:22.759 21:10:16 -- accel/accel.sh@21 -- # case "$var" in 00:09:22.759 21:10:16 -- accel/accel.sh@19 -- # IFS=: 00:09:22.759 21:10:16 -- accel/accel.sh@19 -- # read -r var val 00:09:22.759 21:10:16 -- accel/accel.sh@20 -- # val= 00:09:22.759 21:10:16 -- accel/accel.sh@21 -- # case "$var" in 00:09:22.759 21:10:16 -- accel/accel.sh@19 -- # IFS=: 00:09:22.759 21:10:16 -- accel/accel.sh@19 -- # read -r var val 00:09:22.759 21:10:16 -- accel/accel.sh@20 -- # val= 00:09:22.759 21:10:16 -- accel/accel.sh@21 -- # case "$var" in 00:09:22.759 21:10:16 -- accel/accel.sh@19 -- # IFS=: 00:09:22.759 21:10:16 -- accel/accel.sh@19 -- # read -r var val 00:09:22.759 21:10:16 -- accel/accel.sh@20 -- # val= 00:09:22.759 21:10:16 -- accel/accel.sh@21 -- # case "$var" in 00:09:22.759 21:10:16 -- accel/accel.sh@19 -- # IFS=: 00:09:22.759 21:10:16 -- accel/accel.sh@19 -- # read -r var val 00:09:22.759 21:10:16 -- accel/accel.sh@20 -- # val= 00:09:22.759 21:10:16 -- accel/accel.sh@21 -- # case "$var" in 00:09:22.759 21:10:16 -- accel/accel.sh@19 -- # IFS=: 00:09:22.759 21:10:16 -- accel/accel.sh@19 -- # read -r var val 00:09:22.759 21:10:16 -- accel/accel.sh@27 -- # [[ -n iaa ]] 00:09:22.759 21:10:16 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:09:22.759 21:10:16 -- accel/accel.sh@27 -- # [[ iaa == \i\a\a ]] 00:09:22.759 
00:09:22.759 real 0m9.730s 00:09:22.759 user 0m3.314s 00:09:22.759 sys 0m0.244s 00:09:22.759 21:10:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:22.759 21:10:16 -- common/autotest_common.sh@10 -- # set +x 00:09:22.759 ************************************ 00:09:22.759 END TEST accel_deomp_full_mthread 00:09:22.759 ************************************ 00:09:22.759 21:10:16 -- accel/accel.sh@124 -- # [[ n == y ]] 00:09:22.759 21:10:16 -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:09:22.759 21:10:16 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:09:22.759 21:10:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:22.759 21:10:16 -- common/autotest_common.sh@10 -- # set +x 00:09:22.759 21:10:16 -- accel/accel.sh@137 -- # build_accel_config 00:09:22.759 21:10:16 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:22.759 21:10:16 -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:09:22.759 21:10:16 -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:09:22.759 21:10:16 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:09:22.759 21:10:16 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:09:22.759 21:10:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:22.759 21:10:16 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:22.759 21:10:16 -- accel/accel.sh@40 -- # local IFS=, 00:09:22.759 21:10:16 -- accel/accel.sh@41 -- # jq -r . 00:09:22.759 ************************************ 00:09:22.759 START TEST accel_dif_functional_tests 00:09:22.759 ************************************ 00:09:22.759 21:10:16 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:09:22.759 [2024-04-23 21:10:16.533666] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 
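accel_dif_functional_tests, which starts here, swaps accel_perf for a dedicated CUnit binary driven by the same JSON config on fd 62 (invocation verbatim from the xtrace above). Its negative verify cases feed the DSA engine deliberately mismatched Guard, Application Tag, and Reference Tag values, so the flood of *ERROR* completion dumps that follows is expected output from passing tests:

    # Standalone DIF suite, as xtraced above; /tmp/accel.json as in the earlier sketch.
    $SPDK/test/accel/dif/dif -c /tmp/accel.json
    # Expected: 20 tests run, 20 passed. The "Completion status 0x9" and
    # "DIF error detected" lines below are injected failures being caught.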
00:09:22.759 [2024-04-23 21:10:16.533765] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1280225 ] 00:09:22.759 EAL: No free 2048 kB hugepages reported on node 1 00:09:22.759 [2024-04-23 21:10:16.649012] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:22.759 [2024-04-23 21:10:16.741947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:22.759 [2024-04-23 21:10:16.742046] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.759 [2024-04-23 21:10:16.742052] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:22.759 [2024-04-23 21:10:16.746582] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:09:22.759 [2024-04-23 21:10:16.754550] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:09:30.896 00:09:30.896 00:09:30.896 CUnit - A unit testing framework for C - Version 2.1-3 00:09:30.896 http://cunit.sourceforge.net/ 00:09:30.896 00:09:30.896 00:09:30.896 Suite: accel_dif 00:09:30.896 Test: verify: DIF generated, GUARD check ...passed 00:09:30.896 Test: verify: DIF generated, APPTAG check ...passed 00:09:30.896 Test: verify: DIF generated, REFTAG check ...passed 00:09:30.896 Test: verify: DIF not generated, GUARD check ...[2024-04-23 21:10:23.673567] idxd.c:1812:spdk_idxd_process_events: *ERROR*: Completion status 0x9 00:09:30.896 [2024-04-23 21:10:23.673616] idxd_user.c: 428:user_idxd_dump_sw_err: *NOTICE*: SW Error Raw:[2024-04-23 21:10:23.673632] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:09:30.896 [2024-04-23 21:10:23.673642] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:09:30.896 [2024-04-23 21:10:23.673649] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:09:30.896 [2024-04-23 21:10:23.673657] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:09:30.896 [2024-04-23 21:10:23.673663] idxd_user.c: 434:user_idxd_dump_sw_err: *NOTICE*: SW Error error code: 0 00:09:30.896 [2024-04-23 21:10:23.673672] idxd_user.c: 435:user_idxd_dump_sw_err: *NOTICE*: SW Error WQ index: 0 00:09:30.896 [2024-04-23 21:10:23.673680] idxd_user.c: 436:user_idxd_dump_sw_err: *NOTICE*: SW Error Operation: 0 00:09:30.896 [2024-04-23 21:10:23.673702] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:09:30.896 [2024-04-23 21:10:23.673711] accel_dsa.c: 127:dsa_done: *ERROR*: DIF error detected. 
type=4, offset=0 00:09:30.896 [2024-04-23 21:10:23.673738] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:09:30.896 passed 00:09:30.896 Test: verify: DIF not generated, APPTAG check ...[2024-04-23 21:10:23.673801] idxd.c:1812:spdk_idxd_process_events: *ERROR*: Completion status 0x9 00:09:30.896 [2024-04-23 21:10:23.673813] idxd_user.c: 428:user_idxd_dump_sw_err: *NOTICE*: SW Error Raw:[2024-04-23 21:10:23.673824] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:09:30.896 [2024-04-23 21:10:23.673831] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:09:30.896 [2024-04-23 21:10:23.673838] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:09:30.896 [2024-04-23 21:10:23.673846] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:09:30.896 [2024-04-23 21:10:23.673854] idxd_user.c: 434:user_idxd_dump_sw_err: *NOTICE*: SW Error error code: 0 00:09:30.897 [2024-04-23 21:10:23.673860] idxd_user.c: 435:user_idxd_dump_sw_err: *NOTICE*: SW Error WQ index: 0 00:09:30.897 [2024-04-23 21:10:23.673868] idxd_user.c: 436:user_idxd_dump_sw_err: *NOTICE*: SW Error Operation: 0 00:09:30.897 [2024-04-23 21:10:23.673876] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:09:30.897 [2024-04-23 21:10:23.673885] accel_dsa.c: 127:dsa_done: *ERROR*: DIF error detected. type=2, offset=0 00:09:30.897 [2024-04-23 21:10:23.673901] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:09:30.897 passed 00:09:30.897 Test: verify: DIF not generated, REFTAG check ...[2024-04-23 21:10:23.673936] idxd.c:1812:spdk_idxd_process_events: *ERROR*: Completion status 0x9 00:09:30.897 [2024-04-23 21:10:23.673947] idxd_user.c: 428:user_idxd_dump_sw_err: *NOTICE*: SW Error Raw:[2024-04-23 21:10:23.673953] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:09:30.897 [2024-04-23 21:10:23.673961] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:09:30.897 [2024-04-23 21:10:23.673967] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:09:30.897 [2024-04-23 21:10:23.673975] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:09:30.897 [2024-04-23 21:10:23.673981] idxd_user.c: 434:user_idxd_dump_sw_err: *NOTICE*: SW Error error code: 0 00:09:30.897 [2024-04-23 21:10:23.673993] idxd_user.c: 435:user_idxd_dump_sw_err: *NOTICE*: SW Error WQ index: 0 00:09:30.897 [2024-04-23 21:10:23.673999] idxd_user.c: 436:user_idxd_dump_sw_err: *NOTICE*: SW Error Operation: 0 00:09:30.897 [2024-04-23 21:10:23.674009] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:09:30.897 [2024-04-23 21:10:23.674018] accel_dsa.c: 127:dsa_done: *ERROR*: DIF error detected. 
type=1, offset=0 00:09:30.897 [2024-04-23 21:10:23.674037] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:09:30.897 passed 00:09:30.897 Test: verify: APPTAG correct, APPTAG check ...passed 00:09:30.897 Test: verify: APPTAG incorrect, APPTAG check ...[2024-04-23 21:10:23.674105] idxd.c:1812:spdk_idxd_process_events: *ERROR*: Completion status 0x9 00:09:30.897 [2024-04-23 21:10:23.674114] idxd_user.c: 428:user_idxd_dump_sw_err: *NOTICE*: SW Error Raw:[2024-04-23 21:10:23.674123] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:09:30.897 [2024-04-23 21:10:23.674129] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:09:30.897 [2024-04-23 21:10:23.674136] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:09:30.897 [2024-04-23 21:10:23.674142] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:09:30.897 [2024-04-23 21:10:23.674150] idxd_user.c: 434:user_idxd_dump_sw_err: *NOTICE*: SW Error error code: 0 00:09:30.897 [2024-04-23 21:10:23.674156] idxd_user.c: 435:user_idxd_dump_sw_err: *NOTICE*: SW Error WQ index: 0 00:09:30.897 [2024-04-23 21:10:23.674165] idxd_user.c: 436:user_idxd_dump_sw_err: *NOTICE*: SW Error Operation: 0 00:09:30.897 [2024-04-23 21:10:23.674173] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:09:30.897 [2024-04-23 21:10:23.674182] accel_dsa.c: 127:dsa_done: *ERROR*: DIF error detected. type=2, offset=0 00:09:30.897 passed 00:09:30.897 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:09:30.897 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:09:30.897 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:09:30.897 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-04-23 21:10:23.674341] idxd.c:1812:spdk_idxd_process_events: *ERROR*: Completion status 0x9 00:09:30.897 [2024-04-23 21:10:23.674352] idxd_user.c: 428:user_idxd_dump_sw_err: *NOTICE*: SW Error Raw:[2024-04-23 21:10:23.674359] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:09:30.897 [2024-04-23 21:10:23.674368] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:09:30.897 [2024-04-23 21:10:23.674374] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:09:30.897 [2024-04-23 21:10:23.674384] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:09:30.897 [2024-04-23 21:10:23.674395] idxd_user.c: 434:user_idxd_dump_sw_err: *NOTICE*: SW Error error code: 0 00:09:30.897 [2024-04-23 21:10:23.674402] idxd_user.c: 435:user_idxd_dump_sw_err: *NOTICE*: SW Error WQ index: 0 00:09:30.897 [2024-04-23 21:10:23.674408] idxd_user.c: 436:user_idxd_dump_sw_err: *NOTICE*: SW Error Operation: 0 00:09:30.897 [2024-04-23 21:10:23.674416] idxd.c:1812:spdk_idxd_process_events: *ERROR*: Completion status 0x9 00:09:30.897 [2024-04-23 21:10:23.674423] idxd_user.c: 428:user_idxd_dump_sw_err: *NOTICE*: SW Error Raw:[2024-04-23 21:10:23.674431] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:09:30.897 [2024-04-23 21:10:23.674437] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:09:30.897 [2024-04-23 21:10:23.674445] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:09:30.897 [2024-04-23 21:10:23.674451] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:09:30.897 [2024-04-23 21:10:23.674458] idxd_user.c: 434:user_idxd_dump_sw_err: *NOTICE*: SW Error error code: 0 00:09:30.897 [2024-04-23 21:10:23.674464] idxd_user.c: 435:user_idxd_dump_sw_err: *NOTICE*: SW Error WQ index: 0 00:09:30.897 [2024-04-23 21:10:23.674474] idxd_user.c: 
436:user_idxd_dump_sw_err: *NOTICE*: SW Error Operation: 0 00:09:30.897 [2024-04-23 21:10:23.674483] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:09:30.897 [2024-04-23 21:10:23.674494] accel_dsa.c: 127:dsa_done: *ERROR*: DIF error detected. type=1, offset=0 00:09:30.897 [2024-04-23 21:10:23.674503] idxd.c:1812:spdk_idxd_process_events: *ERROR*: Completion status 0x5 00:09:30.897 passed 00:09:30.897 Test: generate copy: DIF generated, GUARD check ...[2024-04-23 21:10:23.674513] idxd_user.c: 428:user_idxd_dump_sw_err: *NOTICE*: SW Error Raw:[2024-04-23 21:10:23.674520] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:09:30.897 [2024-04-23 21:10:23.674529] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:09:30.897 [2024-04-23 21:10:23.674535] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:09:30.897 [2024-04-23 21:10:23.674543] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:09:30.897 [2024-04-23 21:10:23.674550] idxd_user.c: 434:user_idxd_dump_sw_err: *NOTICE*: SW Error error code: 0 00:09:30.897 [2024-04-23 21:10:23.674558] idxd_user.c: 435:user_idxd_dump_sw_err: *NOTICE*: SW Error WQ index: 0 00:09:30.897 [2024-04-23 21:10:23.674564] idxd_user.c: 436:user_idxd_dump_sw_err: *NOTICE*: SW Error Operation: 0 00:09:30.897 passed 00:09:30.897 Test: generate copy: DIF generated, APTTAG check ...passed 00:09:30.897 Test: generate copy: DIF generated, REFTAG check ...passed 00:09:30.897 Test: generate copy: DIF generated, no GUARD check flag set ...[2024-04-23 21:10:23.674708] idxd.c:1571:idxd_validate_dif_insert_params: *ERROR*: Guard check flag must be set. 00:09:30.897 passed 00:09:30.897 Test: generate copy: DIF generated, no APPTAG check flag set ...[2024-04-23 21:10:23.674746] idxd.c:1576:idxd_validate_dif_insert_params: *ERROR*: Application Tag check flag must be set. 00:09:30.897 passed 00:09:30.897 Test: generate copy: DIF generated, no REFTAG check flag set ...[2024-04-23 21:10:23.674784] idxd.c:1581:idxd_validate_dif_insert_params: *ERROR*: Reference Tag check flag must be set. 00:09:30.897 passed 00:09:30.897 Test: generate copy: iovecs-len validate ...[2024-04-23 21:10:23.674818] idxd.c:1608:idxd_validate_dif_insert_iovecs: *ERROR*: Invalid length of data in src (4096) and dst (4176) in iovecs[0]. 
00:09:30.897 passed 00:09:30.897 Test: generate copy: buffer alignment validate ...passed 00:09:30.897 00:09:30.897 Run Summary: Type Total Ran Passed Failed Inactive 00:09:30.897 suites 1 1 n/a 0 0 00:09:30.897 tests 20 20 20 0 0 00:09:30.897 asserts 204 204 204 0 n/a 00:09:30.897 00:09:30.897 Elapsed time = 0.005 seconds 00:09:31.834 00:09:31.834 real 0m9.551s 00:09:31.834 user 0m20.060s 00:09:31.834 sys 0m0.287s 00:09:31.834 21:10:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:31.834 21:10:26 -- common/autotest_common.sh@10 -- # set +x 00:09:31.834 ************************************ 00:09:31.834 END TEST accel_dif_functional_tests 00:09:31.834 ************************************ 00:09:31.834 00:09:31.834 real 3m54.470s 00:09:31.834 user 2m31.318s 00:09:31.834 sys 0m8.094s 00:09:31.834 21:10:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:31.834 21:10:26 -- common/autotest_common.sh@10 -- # set +x 00:09:31.834 ************************************ 00:09:31.834 END TEST accel 00:09:31.834 ************************************ 00:09:31.834 21:10:26 -- spdk/autotest.sh@180 -- # run_test accel_rpc /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/accel_rpc.sh 00:09:31.834 21:10:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:31.834 21:10:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:31.834 21:10:26 -- common/autotest_common.sh@10 -- # set +x 00:09:32.096 ************************************ 00:09:32.096 START TEST accel_rpc 00:09:32.096 ************************************ 00:09:32.096 21:10:26 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/accel_rpc.sh 00:09:32.096 * Looking for test storage... 00:09:32.096 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel 00:09:32.096 21:10:26 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:09:32.096 21:10:26 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1282140 00:09:32.096 21:10:26 -- accel/accel_rpc.sh@15 -- # waitforlisten 1282140 00:09:32.096 21:10:26 -- common/autotest_common.sh@817 -- # '[' -z 1282140 ']' 00:09:32.096 21:10:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:32.096 21:10:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:32.096 21:10:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:32.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:32.096 21:10:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:32.096 21:10:26 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:09:32.096 21:10:26 -- common/autotest_common.sh@10 -- # set +x 00:09:32.096 [2024-04-23 21:10:26.326625] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 
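The accel_rpc suite that follows drives a bare spdk_tgt (started with --wait-for-rpc, pid 1282140) entirely over JSON-RPC; the test's rpc_cmd calls are sketched here as direct rpc.py invocations. Method names, flags, and the jq filter are verbatim from the log; the trailing comments are the outcomes recorded below:

    RPC=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
    $RPC dsa_scan_accel_module                    # "Enabled DSA user-mode"
    $RPC dsa_scan_accel_module                    # repeat: -114 "Operation already in progress"
    $RPC iaa_scan_accel_module                    # same pattern for IAA
    $RPC accel_assign_opc -o copy -m incorrect    # accepted pre-init despite the bogus module name
    $RPC accel_assign_opc -o copy -m software     # reassign the copy opcode to the software module
    $RPC framework_start_init
    $RPC accel_get_opc_assignments | jq -r .copy  # -> software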
00:09:32.096 [2024-04-23 21:10:26.326742] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1282140 ] 00:09:32.357 EAL: No free 2048 kB hugepages reported on node 1 00:09:32.357 [2024-04-23 21:10:26.442823] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.357 [2024-04-23 21:10:26.540257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.926 21:10:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:32.926 21:10:27 -- common/autotest_common.sh@850 -- # return 0 00:09:32.926 21:10:27 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:09:32.926 21:10:27 -- accel/accel_rpc.sh@45 -- # [[ 1 -gt 0 ]] 00:09:32.926 21:10:27 -- accel/accel_rpc.sh@46 -- # run_test accel_scan_dsa_modules accel_scan_dsa_modules_test_suite 00:09:32.926 21:10:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:32.926 21:10:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:32.926 21:10:27 -- common/autotest_common.sh@10 -- # set +x 00:09:32.926 ************************************ 00:09:32.926 START TEST accel_scan_dsa_modules 00:09:32.926 ************************************ 00:09:32.926 21:10:27 -- common/autotest_common.sh@1111 -- # accel_scan_dsa_modules_test_suite 00:09:32.926 21:10:27 -- accel/accel_rpc.sh@21 -- # rpc_cmd dsa_scan_accel_module 00:09:32.926 21:10:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:32.926 21:10:27 -- common/autotest_common.sh@10 -- # set +x 00:09:32.926 [2024-04-23 21:10:27.176793] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:09:32.926 21:10:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:32.926 21:10:27 -- accel/accel_rpc.sh@22 -- # NOT rpc_cmd dsa_scan_accel_module 00:09:32.926 21:10:27 -- common/autotest_common.sh@638 -- # local es=0 00:09:32.926 21:10:27 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd dsa_scan_accel_module 00:09:32.926 21:10:27 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:09:32.926 21:10:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:32.926 21:10:27 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:09:32.926 21:10:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:32.926 21:10:27 -- common/autotest_common.sh@641 -- # rpc_cmd dsa_scan_accel_module 00:09:32.926 21:10:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:32.926 21:10:27 -- common/autotest_common.sh@10 -- # set +x 00:09:32.926 request: 00:09:32.926 { 00:09:32.926 "method": "dsa_scan_accel_module", 00:09:32.926 "req_id": 1 00:09:32.926 } 00:09:32.926 Got JSON-RPC error response 00:09:32.926 response: 00:09:32.926 { 00:09:32.926 "code": -114, 00:09:32.926 "message": "Operation already in progress" 00:09:32.926 } 00:09:32.926 21:10:27 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:09:32.926 21:10:27 -- common/autotest_common.sh@641 -- # es=1 00:09:32.926 21:10:27 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:32.926 21:10:27 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:09:32.926 21:10:27 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:32.926 00:09:32.926 real 0m0.022s 00:09:32.926 user 0m0.003s 00:09:32.926 sys 0m0.002s 00:09:32.926 21:10:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:32.926 21:10:27 -- common/autotest_common.sh@10 -- # set +x 00:09:32.926 
************************************ 00:09:32.926 END TEST accel_scan_dsa_modules 00:09:32.926 ************************************ 00:09:33.187 21:10:27 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:09:33.187 21:10:27 -- accel/accel_rpc.sh@49 -- # [[ 1 -gt 0 ]] 00:09:33.187 21:10:27 -- accel/accel_rpc.sh@50 -- # run_test accel_scan_iaa_modules accel_scan_iaa_modules_test_suite 00:09:33.187 21:10:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:33.187 21:10:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:33.187 21:10:27 -- common/autotest_common.sh@10 -- # set +x 00:09:33.187 ************************************ 00:09:33.187 START TEST accel_scan_iaa_modules 00:09:33.187 ************************************ 00:09:33.187 21:10:27 -- common/autotest_common.sh@1111 -- # accel_scan_iaa_modules_test_suite 00:09:33.187 21:10:27 -- accel/accel_rpc.sh@29 -- # rpc_cmd iaa_scan_accel_module 00:09:33.187 21:10:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:33.187 21:10:27 -- common/autotest_common.sh@10 -- # set +x 00:09:33.187 [2024-04-23 21:10:27.304815] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:09:33.187 21:10:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:33.187 21:10:27 -- accel/accel_rpc.sh@30 -- # NOT rpc_cmd iaa_scan_accel_module 00:09:33.187 21:10:27 -- common/autotest_common.sh@638 -- # local es=0 00:09:33.187 21:10:27 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd iaa_scan_accel_module 00:09:33.187 21:10:27 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:09:33.187 21:10:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:33.187 21:10:27 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:09:33.187 21:10:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:33.187 21:10:27 -- common/autotest_common.sh@641 -- # rpc_cmd iaa_scan_accel_module 00:09:33.187 21:10:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:33.187 21:10:27 -- common/autotest_common.sh@10 -- # set +x 00:09:33.187 request: 00:09:33.187 { 00:09:33.187 "method": "iaa_scan_accel_module", 00:09:33.187 "req_id": 1 00:09:33.187 } 00:09:33.187 Got JSON-RPC error response 00:09:33.187 response: 00:09:33.187 { 00:09:33.187 "code": -114, 00:09:33.187 "message": "Operation already in progress" 00:09:33.187 } 00:09:33.187 21:10:27 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:09:33.187 21:10:27 -- common/autotest_common.sh@641 -- # es=1 00:09:33.187 21:10:27 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:33.187 21:10:27 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:09:33.187 21:10:27 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:33.187 00:09:33.187 real 0m0.021s 00:09:33.187 user 0m0.006s 00:09:33.187 sys 0m0.000s 00:09:33.187 21:10:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:33.187 21:10:27 -- common/autotest_common.sh@10 -- # set +x 00:09:33.187 ************************************ 00:09:33.187 END TEST accel_scan_iaa_modules 00:09:33.187 ************************************ 00:09:33.187 21:10:27 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:09:33.187 21:10:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:33.187 21:10:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:33.187 21:10:27 -- common/autotest_common.sh@10 -- # set +x 00:09:33.187 ************************************ 00:09:33.187 START TEST accel_assign_opcode 
00:09:33.187 ************************************ 00:09:33.187 21:10:27 -- common/autotest_common.sh@1111 -- # accel_assign_opcode_test_suite 00:09:33.187 21:10:27 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:09:33.187 21:10:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:33.187 21:10:27 -- common/autotest_common.sh@10 -- # set +x 00:09:33.187 [2024-04-23 21:10:27.456854] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:09:33.449 21:10:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:33.449 21:10:27 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:09:33.449 21:10:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:33.449 21:10:27 -- common/autotest_common.sh@10 -- # set +x 00:09:33.449 [2024-04-23 21:10:27.464827] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:09:33.449 21:10:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:33.449 21:10:27 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:09:33.449 21:10:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:33.449 21:10:27 -- common/autotest_common.sh@10 -- # set +x 00:09:41.589 21:10:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:41.589 21:10:34 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:09:41.589 21:10:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:41.589 21:10:34 -- common/autotest_common.sh@10 -- # set +x 00:09:41.590 21:10:34 -- accel/accel_rpc.sh@42 -- # grep software 00:09:41.590 21:10:34 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:09:41.590 21:10:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:41.590 software 00:09:41.590 00:09:41.590 real 0m7.218s 00:09:41.590 user 0m0.034s 00:09:41.590 sys 0m0.009s 00:09:41.590 21:10:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:41.590 21:10:34 -- common/autotest_common.sh@10 -- # set +x 00:09:41.590 ************************************ 00:09:41.590 END TEST accel_assign_opcode 00:09:41.590 ************************************ 00:09:41.590 21:10:34 -- accel/accel_rpc.sh@55 -- # killprocess 1282140 00:09:41.590 21:10:34 -- common/autotest_common.sh@936 -- # '[' -z 1282140 ']' 00:09:41.590 21:10:34 -- common/autotest_common.sh@940 -- # kill -0 1282140 00:09:41.590 21:10:34 -- common/autotest_common.sh@941 -- # uname 00:09:41.590 21:10:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:41.590 21:10:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1282140 00:09:41.590 21:10:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:41.590 21:10:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:41.590 21:10:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1282140' 00:09:41.590 killing process with pid 1282140 00:09:41.590 21:10:34 -- common/autotest_common.sh@955 -- # kill 1282140 00:09:41.590 21:10:34 -- common/autotest_common.sh@960 -- # wait 1282140 00:09:43.503 00:09:43.503 real 0m11.335s 00:09:43.503 user 0m4.432s 00:09:43.503 sys 0m0.772s 00:09:43.503 21:10:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:43.503 21:10:37 -- common/autotest_common.sh@10 -- # set +x 00:09:43.503 ************************************ 00:09:43.503 END TEST accel_rpc 00:09:43.503 ************************************ 00:09:43.503 21:10:37 -- spdk/autotest.sh@181 -- # run_test app_cmdline 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/app/cmdline.sh 00:09:43.504 21:10:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:43.504 21:10:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:43.504 21:10:37 -- common/autotest_common.sh@10 -- # set +x 00:09:43.504 ************************************ 00:09:43.504 START TEST app_cmdline 00:09:43.504 ************************************ 00:09:43.504 21:10:37 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/app/cmdline.sh 00:09:43.504 * Looking for test storage... 00:09:43.504 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/app 00:09:43.504 21:10:37 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:09:43.504 21:10:37 -- app/cmdline.sh@17 -- # spdk_tgt_pid=1284516 00:09:43.504 21:10:37 -- app/cmdline.sh@18 -- # waitforlisten 1284516 00:09:43.504 21:10:37 -- common/autotest_common.sh@817 -- # '[' -z 1284516 ']' 00:09:43.504 21:10:37 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:09:43.504 21:10:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:43.504 21:10:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:43.504 21:10:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:43.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:43.504 21:10:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:43.504 21:10:37 -- common/autotest_common.sh@10 -- # set +x 00:09:43.504 [2024-04-23 21:10:37.770331] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 
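cmdline.sh launches spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods, so the target will answer exactly those two methods and nothing else; that is what the exchange below verifies. The same check, sketched as direct rpc.py calls (flags and method names verbatim from this log):

    SPDK=/var/jenkins/workspace/dsa-phy-autotest/spdk
    $SPDK/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    $SPDK/scripts/rpc.py spdk_get_version                      # ok: version JSON as printed below
    $SPDK/scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort  # exactly the two allowed methods
    $SPDK/scripts/rpc.py env_dpdk_get_mem_stats                # rejected: -32601 "Method not found"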
00:09:43.504 [2024-04-23 21:10:37.770443] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1284516 ] 00:09:43.766 EAL: No free 2048 kB hugepages reported on node 1 00:09:43.766 [2024-04-23 21:10:37.890445] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.766 [2024-04-23 21:10:37.985505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.335 21:10:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:44.335 21:10:38 -- common/autotest_common.sh@850 -- # return 0 00:09:44.335 21:10:38 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:09:44.595 { 00:09:44.595 "version": "SPDK v24.05-pre git sha1 3f2c8979187", 00:09:44.595 "fields": { 00:09:44.595 "major": 24, 00:09:44.595 "minor": 5, 00:09:44.595 "patch": 0, 00:09:44.595 "suffix": "-pre", 00:09:44.595 "commit": "3f2c8979187" 00:09:44.595 } 00:09:44.595 } 00:09:44.595 21:10:38 -- app/cmdline.sh@22 -- # expected_methods=() 00:09:44.595 21:10:38 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:09:44.595 21:10:38 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:09:44.595 21:10:38 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:09:44.595 21:10:38 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:09:44.595 21:10:38 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:09:44.595 21:10:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:44.595 21:10:38 -- app/cmdline.sh@26 -- # sort 00:09:44.595 21:10:38 -- common/autotest_common.sh@10 -- # set +x 00:09:44.595 21:10:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:44.595 21:10:38 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:09:44.595 21:10:38 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:09:44.595 21:10:38 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:44.595 21:10:38 -- common/autotest_common.sh@638 -- # local es=0 00:09:44.595 21:10:38 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:44.595 21:10:38 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:09:44.595 21:10:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:44.595 21:10:38 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:09:44.595 21:10:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:44.595 21:10:38 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:09:44.595 21:10:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:44.595 21:10:38 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:09:44.595 21:10:38 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py ]] 00:09:44.595 21:10:38 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:44.595 request: 00:09:44.595 { 00:09:44.595 "method": "env_dpdk_get_mem_stats", 
00:09:44.595 "req_id": 1 00:09:44.595 } 00:09:44.595 Got JSON-RPC error response 00:09:44.595 response: 00:09:44.595 { 00:09:44.595 "code": -32601, 00:09:44.595 "message": "Method not found" 00:09:44.595 } 00:09:44.595 21:10:38 -- common/autotest_common.sh@641 -- # es=1 00:09:44.595 21:10:38 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:44.595 21:10:38 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:09:44.595 21:10:38 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:44.595 21:10:38 -- app/cmdline.sh@1 -- # killprocess 1284516 00:09:44.595 21:10:38 -- common/autotest_common.sh@936 -- # '[' -z 1284516 ']' 00:09:44.595 21:10:38 -- common/autotest_common.sh@940 -- # kill -0 1284516 00:09:44.595 21:10:38 -- common/autotest_common.sh@941 -- # uname 00:09:44.595 21:10:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:44.595 21:10:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1284516 00:09:44.856 21:10:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:44.856 21:10:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:44.856 21:10:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1284516' 00:09:44.856 killing process with pid 1284516 00:09:44.856 21:10:38 -- common/autotest_common.sh@955 -- # kill 1284516 00:09:44.856 21:10:38 -- common/autotest_common.sh@960 -- # wait 1284516 00:09:45.846 00:09:45.846 real 0m2.099s 00:09:45.846 user 0m2.291s 00:09:45.846 sys 0m0.474s 00:09:45.846 21:10:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:45.846 21:10:39 -- common/autotest_common.sh@10 -- # set +x 00:09:45.846 ************************************ 00:09:45.846 END TEST app_cmdline 00:09:45.846 ************************************ 00:09:45.846 21:10:39 -- spdk/autotest.sh@182 -- # run_test version /var/jenkins/workspace/dsa-phy-autotest/spdk/test/app/version.sh 00:09:45.846 21:10:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:45.846 21:10:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:45.846 21:10:39 -- common/autotest_common.sh@10 -- # set +x 00:09:45.846 ************************************ 00:09:45.846 START TEST version 00:09:45.846 ************************************ 00:09:45.846 21:10:39 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/app/version.sh 00:09:45.846 * Looking for test storage... 
00:09:45.846 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/app 00:09:45.846 21:10:39 -- app/version.sh@17 -- # get_header_version major 00:09:45.846 21:10:39 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk/version.h 00:09:45.846 21:10:39 -- app/version.sh@14 -- # cut -f2 00:09:45.846 21:10:39 -- app/version.sh@14 -- # tr -d '"' 00:09:45.846 21:10:39 -- app/version.sh@17 -- # major=24 00:09:45.846 21:10:39 -- app/version.sh@18 -- # get_header_version minor 00:09:45.846 21:10:39 -- app/version.sh@14 -- # tr -d '"' 00:09:45.846 21:10:39 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk/version.h 00:09:45.846 21:10:39 -- app/version.sh@14 -- # cut -f2 00:09:45.846 21:10:39 -- app/version.sh@18 -- # minor=5 00:09:45.846 21:10:39 -- app/version.sh@19 -- # get_header_version patch 00:09:45.846 21:10:39 -- app/version.sh@14 -- # cut -f2 00:09:45.846 21:10:39 -- app/version.sh@14 -- # tr -d '"' 00:09:45.846 21:10:39 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk/version.h 00:09:45.846 21:10:39 -- app/version.sh@19 -- # patch=0 00:09:45.846 21:10:39 -- app/version.sh@20 -- # get_header_version suffix 00:09:45.846 21:10:39 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk/version.h 00:09:45.846 21:10:39 -- app/version.sh@14 -- # tr -d '"' 00:09:45.846 21:10:39 -- app/version.sh@14 -- # cut -f2 00:09:45.846 21:10:39 -- app/version.sh@20 -- # suffix=-pre 00:09:45.846 21:10:39 -- app/version.sh@22 -- # version=24.5 00:09:45.846 21:10:39 -- app/version.sh@25 -- # (( patch != 0 )) 00:09:45.846 21:10:39 -- app/version.sh@28 -- # version=24.5rc0 00:09:45.846 21:10:39 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python 00:09:45.846 21:10:39 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:09:45.846 21:10:39 -- app/version.sh@30 -- # py_version=24.5rc0 00:09:45.846 21:10:39 -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:09:45.846 00:09:45.846 real 0m0.124s 00:09:45.846 user 0m0.058s 00:09:45.846 sys 0m0.090s 00:09:45.846 21:10:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:45.846 21:10:39 -- common/autotest_common.sh@10 -- # set +x 00:09:45.846 ************************************ 00:09:45.846 END TEST version 00:09:45.847 ************************************ 00:09:45.847 21:10:39 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:09:45.847 21:10:39 -- spdk/autotest.sh@194 -- # uname -s 00:09:45.847 21:10:39 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:09:45.847 21:10:39 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:09:45.847 21:10:39 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:09:45.847 21:10:39 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:09:45.847 21:10:39 -- spdk/autotest.sh@254 -- # '[' 0 -eq 1 ']' 00:09:45.847 21:10:39 -- spdk/autotest.sh@258 -- # timing_exit lib 00:09:45.847 21:10:39 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:45.847 21:10:39 -- common/autotest_common.sh@10 -- # set +x 
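version.sh above rebuilds the release string from include/spdk/version.h with grep/cut/tr and cross-checks it against the Python package. A condensed sketch of that header parsing, with $SPDK_DIR again a placeholder for the checkout:

#!/usr/bin/env bash
SPDK_DIR=${SPDK_DIR:?set to an SPDK checkout}
get_header_version() {
    # e.g. '#define SPDK_VERSION_MAJOR 24' -> '24'; cut's default tab
    # delimiter matches the header layout, as in the traced script.
    grep -E "^#define SPDK_VERSION_$1[[:space:]]+" \
        "$SPDK_DIR/include/spdk/version.h" | cut -f2 | tr -d '"'
}
major=$(get_header_version MAJOR)    # 24 in this run
minor=$(get_header_version MINOR)    # 5
patch=$(get_header_version PATCH)    # 0
suffix=$(get_header_version SUFFIX)  # -pre
version=$major.$minor
(( patch != 0 )) && version=$version.$patch
[[ $suffix == -pre ]] && version=${version}rc0   # '-pre' is reported as rc0
echo "$version"                                  # 24.5rc0 here

The trace then compares this against python3 -c 'import spdk; print(spdk.__version__)', which also reported 24.5rc0.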
00:09:45.847 21:10:40 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:09:45.847 21:10:40 -- spdk/autotest.sh@268 -- # '[' 0 -eq 1 ']' 00:09:45.847 21:10:40 -- spdk/autotest.sh@277 -- # '[' 1 -eq 1 ']' 00:09:45.847 21:10:40 -- spdk/autotest.sh@278 -- # export NET_TYPE 00:09:45.847 21:10:40 -- spdk/autotest.sh@281 -- # '[' tcp = rdma ']' 00:09:45.847 21:10:40 -- spdk/autotest.sh@284 -- # '[' tcp = tcp ']' 00:09:45.847 21:10:40 -- spdk/autotest.sh@285 -- # run_test nvmf_tcp /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:45.847 21:10:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:45.847 21:10:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:45.847 21:10:40 -- common/autotest_common.sh@10 -- # set +x 00:09:46.177 ************************************ 00:09:46.177 START TEST nvmf_tcp 00:09:46.177 ************************************ 00:09:46.177 21:10:40 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:46.177 * Looking for test storage... 00:09:46.177 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf 00:09:46.177 21:10:40 -- nvmf/nvmf.sh@10 -- # uname -s 00:09:46.177 21:10:40 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:09:46.177 21:10:40 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:09:46.177 21:10:40 -- nvmf/common.sh@7 -- # uname -s 00:09:46.177 21:10:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:46.177 21:10:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:46.177 21:10:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:46.177 21:10:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:46.177 21:10:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:46.177 21:10:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:46.178 21:10:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:46.178 21:10:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:46.178 21:10:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:46.178 21:10:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:46.178 21:10:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:09:46.178 21:10:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:09:46.178 21:10:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:46.178 21:10:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:46.178 21:10:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:46.178 21:10:40 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:46.178 21:10:40 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:09:46.178 21:10:40 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:46.178 21:10:40 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:46.178 21:10:40 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:46.178 21:10:40 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.178 21:10:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.178 21:10:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.178 21:10:40 -- paths/export.sh@5 -- # export PATH 00:09:46.178 21:10:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.178 21:10:40 -- nvmf/common.sh@47 -- # : 0 00:09:46.178 21:10:40 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:46.178 21:10:40 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:46.178 21:10:40 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:46.178 21:10:40 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:46.178 21:10:40 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:46.178 21:10:40 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:46.178 21:10:40 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:46.178 21:10:40 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:46.178 21:10:40 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:46.178 21:10:40 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:09:46.178 21:10:40 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:09:46.178 21:10:40 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:46.178 21:10:40 -- common/autotest_common.sh@10 -- # set +x 00:09:46.178 21:10:40 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:09:46.178 21:10:40 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:46.178 21:10:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:46.178 21:10:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:46.178 21:10:40 -- common/autotest_common.sh@10 -- # set +x 00:09:46.178 ************************************ 00:09:46.178 START TEST nvmf_example 00:09:46.178 ************************************ 00:09:46.178 21:10:40 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:46.178 * Looking for test storage... 
00:09:46.178 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:09:46.178 21:10:40 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:09:46.178 21:10:40 -- nvmf/common.sh@7 -- # uname -s 00:09:46.178 21:10:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:46.178 21:10:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:46.178 21:10:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:46.178 21:10:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:46.178 21:10:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:46.178 21:10:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:46.178 21:10:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:46.178 21:10:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:46.178 21:10:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:46.178 21:10:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:46.178 21:10:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:09:46.178 21:10:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:09:46.178 21:10:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:46.178 21:10:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:46.178 21:10:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:46.178 21:10:40 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:46.178 21:10:40 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:09:46.178 21:10:40 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:46.178 21:10:40 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:46.178 21:10:40 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:46.178 21:10:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.178 21:10:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.178 21:10:40 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.178 21:10:40 -- paths/export.sh@5 -- # export PATH 00:09:46.178 21:10:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.178 21:10:40 -- nvmf/common.sh@47 -- # : 0 00:09:46.178 21:10:40 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:46.178 21:10:40 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:46.178 21:10:40 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:46.178 21:10:40 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:46.178 21:10:40 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:46.178 21:10:40 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:46.178 21:10:40 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:46.178 21:10:40 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:46.178 21:10:40 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:09:46.178 21:10:40 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:09:46.178 21:10:40 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:09:46.178 21:10:40 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:09:46.178 21:10:40 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:09:46.178 21:10:40 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:09:46.178 21:10:40 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:09:46.178 21:10:40 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:09:46.178 21:10:40 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:46.178 21:10:40 -- common/autotest_common.sh@10 -- # set +x 00:09:46.178 21:10:40 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:09:46.178 21:10:40 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:09:46.178 21:10:40 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:46.178 21:10:40 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:46.178 21:10:40 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:46.178 21:10:40 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:46.178 21:10:40 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:46.178 21:10:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:46.178 21:10:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:46.178 21:10:40 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:09:46.178 21:10:40 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:09:46.178 21:10:40 -- nvmf/common.sh@285 -- # xtrace_disable 00:09:46.178 21:10:40 -- 
common/autotest_common.sh@10 -- # set +x 00:09:52.759 21:10:46 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:52.759 21:10:46 -- nvmf/common.sh@291 -- # pci_devs=() 00:09:52.760 21:10:46 -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:52.760 21:10:46 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:52.760 21:10:46 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:52.760 21:10:46 -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:52.760 21:10:46 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:52.760 21:10:46 -- nvmf/common.sh@295 -- # net_devs=() 00:09:52.760 21:10:46 -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:52.760 21:10:46 -- nvmf/common.sh@296 -- # e810=() 00:09:52.760 21:10:46 -- nvmf/common.sh@296 -- # local -ga e810 00:09:52.760 21:10:46 -- nvmf/common.sh@297 -- # x722=() 00:09:52.760 21:10:46 -- nvmf/common.sh@297 -- # local -ga x722 00:09:52.760 21:10:46 -- nvmf/common.sh@298 -- # mlx=() 00:09:52.760 21:10:46 -- nvmf/common.sh@298 -- # local -ga mlx 00:09:52.760 21:10:46 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:52.760 21:10:46 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:52.760 21:10:46 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:52.760 21:10:46 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:52.760 21:10:46 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:52.760 21:10:46 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:52.760 21:10:46 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:52.760 21:10:46 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:52.760 21:10:46 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:52.760 21:10:46 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:52.760 21:10:46 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:52.760 21:10:46 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:52.760 21:10:46 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:52.760 21:10:46 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:09:52.760 21:10:46 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:09:52.760 21:10:46 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:09:52.760 21:10:46 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:52.760 21:10:46 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:52.760 21:10:46 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:09:52.760 Found 0000:27:00.0 (0x8086 - 0x159b) 00:09:52.760 21:10:46 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:52.760 21:10:46 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:52.760 21:10:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:52.760 21:10:46 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:52.760 21:10:46 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:52.760 21:10:46 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:52.760 21:10:46 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:09:52.760 Found 0000:27:00.1 (0x8086 - 0x159b) 00:09:52.760 21:10:46 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:52.760 21:10:46 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:52.760 21:10:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:52.760 21:10:46 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:52.760 
21:10:46 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:52.760 21:10:46 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:52.760 21:10:46 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:09:52.760 21:10:46 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:52.760 21:10:46 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:52.760 21:10:46 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:52.760 21:10:46 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:52.760 21:10:46 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:09:52.760 Found net devices under 0000:27:00.0: cvl_0_0 00:09:52.760 21:10:46 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:52.760 21:10:46 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:52.760 21:10:46 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:52.760 21:10:46 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:52.760 21:10:46 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:52.760 21:10:46 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:09:52.760 Found net devices under 0000:27:00.1: cvl_0_1 00:09:52.760 21:10:46 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:52.760 21:10:46 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:09:52.760 21:10:46 -- nvmf/common.sh@403 -- # is_hw=yes 00:09:52.760 21:10:46 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:09:52.760 21:10:46 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:09:52.760 21:10:46 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:09:52.760 21:10:46 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:52.760 21:10:46 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:52.760 21:10:46 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:52.760 21:10:46 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:52.760 21:10:46 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:52.760 21:10:46 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:52.760 21:10:46 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:52.760 21:10:46 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:52.760 21:10:46 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:52.760 21:10:46 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:52.760 21:10:46 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:52.760 21:10:46 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:52.760 21:10:46 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:52.760 21:10:46 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:52.760 21:10:46 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:52.760 21:10:46 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:52.760 21:10:46 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:52.760 21:10:46 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:52.760 21:10:46 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:52.760 21:10:46 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:52.760 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:52.760 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.501 ms
00:09:52.760 
00:09:52.760 --- 10.0.0.2 ping statistics ---
00:09:52.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:52.760 rtt min/avg/max/mdev = 0.501/0.501/0.501/0.000 ms
00:09:52.760 21:10:46 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:09:52.760 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:09:52.760 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.253 ms
00:09:52.760 
00:09:52.760 --- 10.0.0.1 ping statistics ---
00:09:52.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:52.760 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms
00:09:52.760 21:10:46 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:09:52.760 21:10:46 -- nvmf/common.sh@411 -- # return 0
00:09:52.760 21:10:46 -- nvmf/common.sh@439 -- # '[' '' == iso ']'
00:09:52.760 21:10:46 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:09:52.760 21:10:46 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]]
00:09:52.760 21:10:46 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]]
00:09:52.760 21:10:46 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:09:52.760 21:10:46 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']'
00:09:52.760 21:10:46 -- nvmf/common.sh@463 -- # modprobe nvme-tcp
00:09:52.760 21:10:46 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF'
00:09:52.760 21:10:46 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example
00:09:52.760 21:10:46 -- common/autotest_common.sh@710 -- # xtrace_disable
00:09:52.760 21:10:46 -- common/autotest_common.sh@10 -- # set +x
00:09:52.760 21:10:46 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']'
00:09:52.760 21:10:46 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}")
00:09:52.760 21:10:46 -- target/nvmf_example.sh@34 -- # nvmfpid=1288753
00:09:52.760 21:10:46 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:09:52.760 21:10:46 -- target/nvmf_example.sh@36 -- # waitforlisten 1288753
00:09:52.760 21:10:46 -- common/autotest_common.sh@817 -- # '[' -z 1288753 ']'
00:09:52.760 21:10:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:52.760 21:10:46 -- common/autotest_common.sh@822 -- # local max_retries=100
00:09:52.760 21:10:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
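nvmf_tcp_init in the trace above splits the two cvl ports across namespaces: cvl_0_0 moves into cvl_0_0_ns_spdk as the target interface (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), an iptables rule opens TCP port 4420, and one ping in each direction sanity-checks the path. Condensed, with interface names and addresses exactly as in this run (root required):

# Sketch of the namespace plumbing traced above.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator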
00:09:52.760 21:10:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:52.760 21:10:46 -- common/autotest_common.sh@10 -- # set +x 00:09:52.760 21:10:46 -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:09:52.760 EAL: No free 2048 kB hugepages reported on node 1 00:09:53.333 21:10:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:53.333 21:10:47 -- common/autotest_common.sh@850 -- # return 0 00:09:53.333 21:10:47 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:09:53.333 21:10:47 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:53.333 21:10:47 -- common/autotest_common.sh@10 -- # set +x 00:09:53.333 21:10:47 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:53.333 21:10:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:53.333 21:10:47 -- common/autotest_common.sh@10 -- # set +x 00:09:53.595 21:10:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:53.595 21:10:47 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:09:53.595 21:10:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:53.595 21:10:47 -- common/autotest_common.sh@10 -- # set +x 00:09:53.595 21:10:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:53.595 21:10:47 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:09:53.595 21:10:47 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:53.595 21:10:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:53.595 21:10:47 -- common/autotest_common.sh@10 -- # set +x 00:09:53.595 21:10:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:53.595 21:10:47 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:09:53.595 21:10:47 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:53.595 21:10:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:53.595 21:10:47 -- common/autotest_common.sh@10 -- # set +x 00:09:53.595 21:10:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:53.595 21:10:47 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:53.595 21:10:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:53.595 21:10:47 -- common/autotest_common.sh@10 -- # set +x 00:09:53.595 21:10:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:53.595 21:10:47 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:09:53.595 21:10:47 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:09:53.595 EAL: No free 2048 kB hugepages reported on node 1 00:10:05.816 Initializing NVMe Controllers 00:10:05.816 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:05.816 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:05.816 Initialization complete. Launching workers. 
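Before the results below, nvmf_example.sh provisions the target over RPC and then drives it from the initiator side with spdk_nvme_perf. A condensed sketch of that sequence; rpc.py stands for the checkout's scripts/rpc.py talking to the example app's default socket, and the flags are copied from the traced commands (-M 30 is a 30% read / 70% write mix):

# Sketch of the RPC provisioning and perf run traced above.
rpc.py nvmf_create_transport -t tcp -o -u 8192      # options exactly as traced
rpc.py bdev_malloc_create 64 512                    # 64 MiB RAM bdev, 512 B blocks -> Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

In this run that workload sustained about 18.1K IOPS at roughly 3.5 ms average latency, as the table below shows.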
00:10:05.816 ========================================================
00:10:05.816                                                                          Latency(us)
00:10:05.816 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:10:05.816 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:   18128.60      70.81    3531.09     713.01   16467.96
00:10:05.816 ========================================================
00:10:05.816 Total                                                                  :   18128.60      70.81    3531.09     713.01   16467.96
00:10:05.816 
00:10:05.816 21:10:57 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT
00:10:05.816 21:10:57 -- target/nvmf_example.sh@66 -- # nvmftestfini
00:10:05.816 21:10:57 -- nvmf/common.sh@477 -- # nvmfcleanup
00:10:05.816 21:10:57 -- nvmf/common.sh@117 -- # sync
00:10:05.816 21:10:57 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:10:05.816 21:10:57 -- nvmf/common.sh@120 -- # set +e
00:10:05.816 21:10:57 -- nvmf/common.sh@121 -- # for i in {1..20}
00:10:05.816 21:10:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:10:05.816 rmmod nvme_tcp
00:10:05.816 rmmod nvme_fabrics
00:10:05.816 rmmod nvme_keyring
00:10:05.816 21:10:58 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:10:05.816 21:10:58 -- nvmf/common.sh@124 -- # set -e
00:10:05.816 21:10:58 -- nvmf/common.sh@125 -- # return 0
00:10:05.816 21:10:58 -- nvmf/common.sh@478 -- # '[' -n 1288753 ']'
00:10:05.816 21:10:58 -- nvmf/common.sh@479 -- # killprocess 1288753
00:10:05.816 21:10:58 -- common/autotest_common.sh@936 -- # '[' -z 1288753 ']'
00:10:05.816 21:10:58 -- common/autotest_common.sh@940 -- # kill -0 1288753
00:10:05.816 21:10:58 -- common/autotest_common.sh@941 -- # uname
00:10:05.816 21:10:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:10:05.816 21:10:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1288753
00:10:05.816 21:10:58 -- common/autotest_common.sh@942 -- # process_name=nvmf
00:10:05.816 21:10:58 -- common/autotest_common.sh@946 -- # '[' nvmf = sudo ']'
00:10:05.816 21:10:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1288753'
00:10:05.816 killing process with pid 1288753
00:10:05.816 21:10:58 -- common/autotest_common.sh@955 -- # kill 1288753
00:10:05.816 21:10:58 -- common/autotest_common.sh@960 -- # wait 1288753
00:10:05.816 nvmf threads initialize successfully
00:10:05.816 bdev subsystem init successfully
00:10:05.816 created a nvmf target service
00:10:05.816 create targets's poll groups done
00:10:05.816 all subsystems of target started
00:10:05.816 nvmf target is running
00:10:05.816 all subsystems of target stopped
00:10:05.816 destroy targets's poll groups done
00:10:05.816 destroyed the nvmf target service
00:10:05.816 bdev subsystem finish successfully
00:10:05.816 nvmf threads destroy successfully
00:10:05.816 21:10:58 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:10:05.816 21:10:58 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:10:05.816 21:10:58 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:10:05.816 21:10:58 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:10:05.816 21:10:58 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:10:05.816 21:10:58 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:05.816 21:10:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:10:05.816 21:10:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:06.390 21:11:00 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:10:06.390 21:11:00 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test
00:10:06.390 21:11:00 --
common/autotest_common.sh@716 -- # xtrace_disable 00:10:06.390 21:11:00 -- common/autotest_common.sh@10 -- # set +x 00:10:06.390 00:10:06.390 real 0m20.297s 00:10:06.390 user 0m46.215s 00:10:06.390 sys 0m5.982s 00:10:06.390 21:11:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:06.390 21:11:00 -- common/autotest_common.sh@10 -- # set +x 00:10:06.390 ************************************ 00:10:06.390 END TEST nvmf_example 00:10:06.390 ************************************ 00:10:06.390 21:11:00 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:06.390 21:11:00 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:06.390 21:11:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:06.390 21:11:00 -- common/autotest_common.sh@10 -- # set +x 00:10:06.650 ************************************ 00:10:06.650 START TEST nvmf_filesystem 00:10:06.650 ************************************ 00:10:06.650 21:11:00 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:06.650 * Looking for test storage... 00:10:06.650 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:10:06.650 21:11:00 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh 00:10:06.650 21:11:00 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:06.650 21:11:00 -- common/autotest_common.sh@34 -- # set -e 00:10:06.650 21:11:00 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:06.650 21:11:00 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:06.650 21:11:00 -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/dsa-phy-autotest/spdk/../output ']' 00:10:06.650 21:11:00 -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:06.650 21:11:00 -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/build_config.sh 00:10:06.650 21:11:00 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:06.650 21:11:00 -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:10:06.650 21:11:00 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:06.650 21:11:00 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:06.650 21:11:00 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:06.650 21:11:00 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:06.650 21:11:00 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:06.650 21:11:00 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:06.650 21:11:00 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:06.650 21:11:00 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:06.650 21:11:00 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:06.650 21:11:00 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:06.650 21:11:00 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:06.650 21:11:00 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:06.650 21:11:00 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:06.650 21:11:00 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:06.650 21:11:00 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:10:06.650 21:11:00 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:06.650 21:11:00 -- common/build_config.sh@19 -- # 
CONFIG_ENV=/var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk 00:10:06.650 21:11:00 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:10:06.650 21:11:00 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:10:06.650 21:11:00 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:10:06.650 21:11:00 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:06.650 21:11:00 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:10:06.650 21:11:00 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:10:06.650 21:11:00 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:06.650 21:11:00 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:06.650 21:11:00 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:10:06.650 21:11:00 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:10:06.650 21:11:00 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:10:06.650 21:11:00 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:10:06.650 21:11:00 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:10:06.650 21:11:00 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:10:06.650 21:11:00 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:10:06.650 21:11:00 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:10:06.650 21:11:00 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build 00:10:06.650 21:11:00 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:10:06.650 21:11:00 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:10:06.650 21:11:00 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:10:06.650 21:11:00 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:10:06.650 21:11:00 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:10:06.650 21:11:00 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:10:06.650 21:11:00 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:10:06.650 21:11:00 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:06.650 21:11:00 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:10:06.650 21:11:00 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:10:06.650 21:11:00 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:10:06.650 21:11:00 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:06.650 21:11:00 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:10:06.650 21:11:00 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:10:06.650 21:11:00 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:10:06.650 21:11:00 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:10:06.650 21:11:00 -- common/build_config.sh@53 -- # CONFIG_HAVE_EVP_MAC=y 00:10:06.650 21:11:00 -- common/build_config.sh@54 -- # CONFIG_URING_ZNS=n 00:10:06.650 21:11:00 -- common/build_config.sh@55 -- # CONFIG_WERROR=y 00:10:06.650 21:11:00 -- common/build_config.sh@56 -- # CONFIG_HAVE_LIBBSD=n 00:10:06.650 21:11:00 -- common/build_config.sh@57 -- # CONFIG_UBSAN=y 00:10:06.650 21:11:00 -- common/build_config.sh@58 -- # CONFIG_IPSEC_MB_DIR= 00:10:06.650 21:11:00 -- common/build_config.sh@59 -- # CONFIG_GOLANG=n 00:10:06.650 21:11:00 -- common/build_config.sh@60 -- # CONFIG_ISAL=y 00:10:06.650 21:11:00 -- common/build_config.sh@61 -- # CONFIG_IDXD_KERNEL=n 00:10:06.650 21:11:00 -- common/build_config.sh@62 -- # CONFIG_DPDK_LIB_DIR= 00:10:06.650 21:11:00 -- common/build_config.sh@63 -- # CONFIG_RDMA_PROV=verbs 00:10:06.650 21:11:00 -- common/build_config.sh@64 -- # CONFIG_APPS=y 00:10:06.650 21:11:00 -- common/build_config.sh@65 -- # 
CONFIG_SHARED=y 00:10:06.650 21:11:00 -- common/build_config.sh@66 -- # CONFIG_HAVE_KEYUTILS=n 00:10:06.650 21:11:00 -- common/build_config.sh@67 -- # CONFIG_FC_PATH= 00:10:06.650 21:11:00 -- common/build_config.sh@68 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:06.650 21:11:00 -- common/build_config.sh@69 -- # CONFIG_FC=n 00:10:06.650 21:11:00 -- common/build_config.sh@70 -- # CONFIG_AVAHI=n 00:10:06.651 21:11:00 -- common/build_config.sh@71 -- # CONFIG_FIO_PLUGIN=y 00:10:06.651 21:11:00 -- common/build_config.sh@72 -- # CONFIG_RAID5F=n 00:10:06.651 21:11:00 -- common/build_config.sh@73 -- # CONFIG_EXAMPLES=y 00:10:06.651 21:11:00 -- common/build_config.sh@74 -- # CONFIG_TESTS=y 00:10:06.651 21:11:00 -- common/build_config.sh@75 -- # CONFIG_CRYPTO_MLX5=n 00:10:06.651 21:11:00 -- common/build_config.sh@76 -- # CONFIG_MAX_LCORES= 00:10:06.651 21:11:00 -- common/build_config.sh@77 -- # CONFIG_IPSEC_MB=n 00:10:06.651 21:11:00 -- common/build_config.sh@78 -- # CONFIG_PGO_DIR= 00:10:06.651 21:11:00 -- common/build_config.sh@79 -- # CONFIG_DEBUG=y 00:10:06.651 21:11:00 -- common/build_config.sh@80 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:06.651 21:11:00 -- common/build_config.sh@81 -- # CONFIG_CROSS_PREFIX= 00:10:06.651 21:11:00 -- common/build_config.sh@82 -- # CONFIG_URING=n 00:10:06.651 21:11:00 -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/applications.sh 00:10:06.651 21:11:00 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/applications.sh 00:10:06.651 21:11:00 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common 00:10:06.651 21:11:00 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/common 00:10:06.651 21:11:00 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/dsa-phy-autotest/spdk 00:10:06.651 21:11:00 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin 00:10:06.651 21:11:00 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/app 00:10:06.651 21:11:00 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples 00:10:06.651 21:11:00 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:06.651 21:11:00 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:06.651 21:11:00 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:06.651 21:11:00 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:06.651 21:11:00 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:06.651 21:11:00 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:06.651 21:11:00 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk/config.h ]] 00:10:06.651 21:11:00 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:06.651 #define SPDK_CONFIG_H 00:10:06.651 #define SPDK_CONFIG_APPS 1 00:10:06.651 #define SPDK_CONFIG_ARCH native 00:10:06.651 #define SPDK_CONFIG_ASAN 1 00:10:06.651 #undef SPDK_CONFIG_AVAHI 00:10:06.651 #undef SPDK_CONFIG_CET 00:10:06.651 #define SPDK_CONFIG_COVERAGE 1 00:10:06.651 #define SPDK_CONFIG_CROSS_PREFIX 00:10:06.651 #undef SPDK_CONFIG_CRYPTO 00:10:06.651 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:06.651 #undef SPDK_CONFIG_CUSTOMOCF 00:10:06.651 #undef SPDK_CONFIG_DAOS 00:10:06.651 #define 
SPDK_CONFIG_DAOS_DIR 00:10:06.651 #define SPDK_CONFIG_DEBUG 1 00:10:06.651 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:06.651 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build 00:10:06.651 #define SPDK_CONFIG_DPDK_INC_DIR 00:10:06.651 #define SPDK_CONFIG_DPDK_LIB_DIR 00:10:06.651 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:06.651 #define SPDK_CONFIG_ENV /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk 00:10:06.651 #define SPDK_CONFIG_EXAMPLES 1 00:10:06.651 #undef SPDK_CONFIG_FC 00:10:06.651 #define SPDK_CONFIG_FC_PATH 00:10:06.651 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:06.651 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:06.651 #undef SPDK_CONFIG_FUSE 00:10:06.651 #undef SPDK_CONFIG_FUZZER 00:10:06.651 #define SPDK_CONFIG_FUZZER_LIB 00:10:06.651 #undef SPDK_CONFIG_GOLANG 00:10:06.651 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:06.651 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:06.651 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:06.651 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:10:06.651 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:06.651 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:06.651 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:06.651 #define SPDK_CONFIG_IDXD 1 00:10:06.651 #undef SPDK_CONFIG_IDXD_KERNEL 00:10:06.651 #undef SPDK_CONFIG_IPSEC_MB 00:10:06.651 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:06.651 #define SPDK_CONFIG_ISAL 1 00:10:06.651 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:06.651 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:06.651 #define SPDK_CONFIG_LIBDIR 00:10:06.651 #undef SPDK_CONFIG_LTO 00:10:06.651 #define SPDK_CONFIG_MAX_LCORES 00:10:06.651 #define SPDK_CONFIG_NVME_CUSE 1 00:10:06.651 #undef SPDK_CONFIG_OCF 00:10:06.651 #define SPDK_CONFIG_OCF_PATH 00:10:06.651 #define SPDK_CONFIG_OPENSSL_PATH 00:10:06.651 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:06.651 #define SPDK_CONFIG_PGO_DIR 00:10:06.651 #undef SPDK_CONFIG_PGO_USE 00:10:06.651 #define SPDK_CONFIG_PREFIX /usr/local 00:10:06.651 #undef SPDK_CONFIG_RAID5F 00:10:06.651 #undef SPDK_CONFIG_RBD 00:10:06.651 #define SPDK_CONFIG_RDMA 1 00:10:06.651 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:06.651 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:06.651 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:06.651 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:06.651 #define SPDK_CONFIG_SHARED 1 00:10:06.651 #undef SPDK_CONFIG_SMA 00:10:06.651 #define SPDK_CONFIG_TESTS 1 00:10:06.651 #undef SPDK_CONFIG_TSAN 00:10:06.651 #define SPDK_CONFIG_UBLK 1 00:10:06.651 #define SPDK_CONFIG_UBSAN 1 00:10:06.651 #undef SPDK_CONFIG_UNIT_TESTS 00:10:06.651 #undef SPDK_CONFIG_URING 00:10:06.651 #define SPDK_CONFIG_URING_PATH 00:10:06.651 #undef SPDK_CONFIG_URING_ZNS 00:10:06.651 #undef SPDK_CONFIG_USDT 00:10:06.651 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:06.651 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:06.651 #undef SPDK_CONFIG_VFIO_USER 00:10:06.651 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:06.651 #define SPDK_CONFIG_VHOST 1 00:10:06.651 #define SPDK_CONFIG_VIRTIO 1 00:10:06.651 #undef SPDK_CONFIG_VTUNE 00:10:06.651 #define SPDK_CONFIG_VTUNE_DIR 00:10:06.651 #define SPDK_CONFIG_WERROR 1 00:10:06.651 #define SPDK_CONFIG_WPDK_DIR 00:10:06.651 #undef SPDK_CONFIG_XNVME 00:10:06.651 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:06.651 21:11:00 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:06.651 21:11:00 -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:10:06.651 21:11:00 -- scripts/common.sh@508 -- 
# [[ -e /bin/wpdk_common.sh ]] 00:10:06.651 21:11:00 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:06.651 21:11:00 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:06.651 21:11:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.651 21:11:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.651 21:11:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.651 21:11:00 -- paths/export.sh@5 -- # export PATH 00:10:06.651 21:11:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.651 21:11:00 -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/common 00:10:06.651 21:11:00 -- pm/common@6 -- # dirname /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/common 00:10:06.651 21:11:00 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm 00:10:06.651 21:11:00 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm 00:10:06.651 21:11:00 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:06.651 21:11:00 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/dsa-phy-autotest/spdk 00:10:06.651 21:11:00 -- pm/common@67 -- # TEST_TAG=N/A 00:10:06.651 21:11:00 -- pm/common@68 -- # TEST_TAG_FILE=/var/jenkins/workspace/dsa-phy-autotest/spdk/.run_test_name 00:10:06.651 21:11:00 -- pm/common@70 -- # 
PM_OUTPUTDIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power 00:10:06.651 21:11:00 -- pm/common@71 -- # uname -s 00:10:06.651 21:11:00 -- pm/common@71 -- # PM_OS=Linux 00:10:06.651 21:11:00 -- pm/common@73 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:06.651 21:11:00 -- pm/common@74 -- # [[ Linux == FreeBSD ]] 00:10:06.651 21:11:00 -- pm/common@76 -- # [[ Linux == Linux ]] 00:10:06.651 21:11:00 -- pm/common@76 -- # [[ ............................... != QEMU ]] 00:10:06.651 21:11:00 -- pm/common@76 -- # [[ ! -e /.dockerenv ]] 00:10:06.651 21:11:00 -- pm/common@79 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:06.651 21:11:00 -- pm/common@80 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:06.651 21:11:00 -- pm/common@83 -- # MONITOR_RESOURCES_PIDS=() 00:10:06.651 21:11:00 -- pm/common@83 -- # declare -A MONITOR_RESOURCES_PIDS 00:10:06.651 21:11:00 -- pm/common@85 -- # mkdir -p /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power 00:10:06.651 21:11:00 -- common/autotest_common.sh@57 -- # : 1 00:10:06.651 21:11:00 -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:10:06.651 21:11:00 -- common/autotest_common.sh@61 -- # : 0 00:10:06.651 21:11:00 -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:06.651 21:11:00 -- common/autotest_common.sh@63 -- # : 0 00:10:06.651 21:11:00 -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:10:06.652 21:11:00 -- common/autotest_common.sh@65 -- # : 1 00:10:06.652 21:11:00 -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:06.652 21:11:00 -- common/autotest_common.sh@67 -- # : 0 00:10:06.652 21:11:00 -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:10:06.652 21:11:00 -- common/autotest_common.sh@69 -- # : 00:10:06.652 21:11:00 -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:10:06.652 21:11:00 -- common/autotest_common.sh@71 -- # : 0 00:10:06.652 21:11:00 -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:10:06.652 21:11:00 -- common/autotest_common.sh@73 -- # : 0 00:10:06.652 21:11:00 -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:10:06.652 21:11:00 -- common/autotest_common.sh@75 -- # : 0 00:10:06.652 21:11:00 -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:10:06.652 21:11:00 -- common/autotest_common.sh@77 -- # : 0 00:10:06.652 21:11:00 -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:06.652 21:11:00 -- common/autotest_common.sh@79 -- # : 0 00:10:06.652 21:11:00 -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:10:06.652 21:11:00 -- common/autotest_common.sh@81 -- # : 0 00:10:06.652 21:11:00 -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:10:06.652 21:11:00 -- common/autotest_common.sh@83 -- # : 0 00:10:06.652 21:11:00 -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:10:06.652 21:11:00 -- common/autotest_common.sh@85 -- # : 0 00:10:06.652 21:11:00 -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:10:06.652 21:11:00 -- common/autotest_common.sh@87 -- # : 0 00:10:06.652 21:11:00 -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:10:06.652 21:11:00 -- common/autotest_common.sh@89 -- # : 0 00:10:06.652 21:11:00 -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:10:06.652 21:11:00 -- common/autotest_common.sh@91 -- # : 1 00:10:06.652 21:11:00 -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:10:06.652 21:11:00 -- common/autotest_common.sh@93 -- # 
: 0 00:10:06.652 21:11:00 -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:10:06.652 21:11:00 -- common/autotest_common.sh@95 -- # : 0 00:10:06.652 21:11:00 -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:06.652 21:11:00 -- common/autotest_common.sh@97 -- # : 0 00:10:06.652 21:11:00 -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:10:06.652 21:11:00 -- common/autotest_common.sh@99 -- # : 0 00:10:06.652 21:11:00 -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:10:06.652 21:11:00 -- common/autotest_common.sh@101 -- # : tcp 00:10:06.652 21:11:00 -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:06.652 21:11:00 -- common/autotest_common.sh@103 -- # : 0 00:10:06.652 21:11:00 -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:10:06.652 21:11:00 -- common/autotest_common.sh@105 -- # : 0 00:10:06.652 21:11:00 -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:10:06.652 21:11:00 -- common/autotest_common.sh@107 -- # : 0 00:10:06.652 21:11:00 -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:10:06.652 21:11:00 -- common/autotest_common.sh@109 -- # : 0 00:10:06.652 21:11:00 -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:10:06.652 21:11:00 -- common/autotest_common.sh@111 -- # : 0 00:10:06.652 21:11:00 -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:10:06.652 21:11:00 -- common/autotest_common.sh@113 -- # : 0 00:10:06.652 21:11:00 -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:10:06.652 21:11:00 -- common/autotest_common.sh@115 -- # : 0 00:10:06.652 21:11:00 -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:10:06.652 21:11:00 -- common/autotest_common.sh@117 -- # : 0 00:10:06.652 21:11:00 -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:06.652 21:11:00 -- common/autotest_common.sh@119 -- # : 1 00:10:06.652 21:11:00 -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:10:06.652 21:11:00 -- common/autotest_common.sh@121 -- # : 1 00:10:06.652 21:11:00 -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:10:06.652 21:11:00 -- common/autotest_common.sh@123 -- # : 00:10:06.652 21:11:00 -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:06.652 21:11:00 -- common/autotest_common.sh@125 -- # : 0 00:10:06.652 21:11:00 -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:10:06.652 21:11:00 -- common/autotest_common.sh@127 -- # : 0 00:10:06.652 21:11:00 -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:10:06.652 21:11:00 -- common/autotest_common.sh@129 -- # : 0 00:10:06.652 21:11:00 -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:10:06.652 21:11:00 -- common/autotest_common.sh@131 -- # : 0 00:10:06.652 21:11:00 -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:10:06.652 21:11:00 -- common/autotest_common.sh@133 -- # : 0 00:10:06.652 21:11:00 -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:10:06.652 21:11:00 -- common/autotest_common.sh@135 -- # : 0 00:10:06.652 21:11:00 -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:10:06.652 21:11:00 -- common/autotest_common.sh@137 -- # : 00:10:06.652 21:11:00 -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:10:06.652 21:11:00 -- common/autotest_common.sh@139 -- # : true 00:10:06.652 21:11:00 -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:10:06.652 21:11:00 -- 
common/autotest_common.sh@141 -- # : 0 00:10:06.652 21:11:00 -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:10:06.652 21:11:00 -- common/autotest_common.sh@143 -- # : 0 00:10:06.652 21:11:00 -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:10:06.652 21:11:00 -- common/autotest_common.sh@145 -- # : 0 00:10:06.652 21:11:00 -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:10:06.652 21:11:00 -- common/autotest_common.sh@147 -- # : 0 00:10:06.652 21:11:00 -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:10:06.652 21:11:00 -- common/autotest_common.sh@149 -- # : 0 00:10:06.652 21:11:00 -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:10:06.652 21:11:00 -- common/autotest_common.sh@151 -- # : 0 00:10:06.652 21:11:00 -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:10:06.652 21:11:00 -- common/autotest_common.sh@153 -- # : 00:10:06.652 21:11:00 -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:10:06.652 21:11:00 -- common/autotest_common.sh@155 -- # : 0 00:10:06.652 21:11:00 -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:10:06.652 21:11:00 -- common/autotest_common.sh@157 -- # : 0 00:10:06.652 21:11:00 -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:10:06.652 21:11:00 -- common/autotest_common.sh@159 -- # : 0 00:10:06.652 21:11:00 -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:10:06.652 21:11:00 -- common/autotest_common.sh@161 -- # : 1 00:10:06.652 21:11:00 -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:10:06.652 21:11:00 -- common/autotest_common.sh@163 -- # : 1 00:10:06.652 21:11:00 -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:10:06.652 21:11:00 -- common/autotest_common.sh@166 -- # : 00:10:06.652 21:11:00 -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:10:06.652 21:11:00 -- common/autotest_common.sh@168 -- # : 0 00:10:06.652 21:11:00 -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:10:06.652 21:11:00 -- common/autotest_common.sh@170 -- # : 0 00:10:06.652 21:11:00 -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:06.652 21:11:00 -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib 00:10:06.652 21:11:00 -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib 00:10:06.652 21:11:00 -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib 00:10:06.652 21:11:00 -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib 00:10:06.652 21:11:00 -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:06.652 21:11:00 -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:06.652 21:11:00 -- common/autotest_common.sh@177 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:06.652 21:11:00 -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:06.652 21:11:00 -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:06.652 21:11:00 -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:06.652 21:11:00 -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python 00:10:06.652 21:11:00 -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python 00:10:06.652 21:11:00 -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:06.652 21:11:00 -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:10:06.652 21:11:00 -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:06.652 21:11:00 -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:06.652 21:11:00 -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:06.653 21:11:00 -- common/autotest_common.sh@193 -- # 
UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:06.653 21:11:00 -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:06.653 21:11:00 -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:10:06.653 21:11:00 -- common/autotest_common.sh@199 -- # cat 00:10:06.653 21:11:00 -- common/autotest_common.sh@225 -- # echo leak:libfuse3.so 00:10:06.653 21:11:00 -- common/autotest_common.sh@227 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:06.653 21:11:00 -- common/autotest_common.sh@227 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:06.653 21:11:00 -- common/autotest_common.sh@229 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:06.653 21:11:00 -- common/autotest_common.sh@229 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:06.653 21:11:00 -- common/autotest_common.sh@231 -- # '[' -z /var/spdk/dependencies ']' 00:10:06.653 21:11:00 -- common/autotest_common.sh@234 -- # export DEPENDENCY_DIR 00:10:06.653 21:11:00 -- common/autotest_common.sh@238 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin 00:10:06.653 21:11:00 -- common/autotest_common.sh@238 -- # SPDK_BIN_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin 00:10:06.653 21:11:00 -- common/autotest_common.sh@239 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples 00:10:06.653 21:11:00 -- common/autotest_common.sh@239 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples 00:10:06.653 21:11:00 -- common/autotest_common.sh@242 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:06.653 21:11:00 -- common/autotest_common.sh@242 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:06.653 21:11:00 -- common/autotest_common.sh@243 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:06.653 21:11:00 -- common/autotest_common.sh@243 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:06.653 21:11:00 -- common/autotest_common.sh@245 -- # export AR_TOOL=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:06.653 21:11:00 -- common/autotest_common.sh@245 -- # AR_TOOL=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:06.653 21:11:00 -- common/autotest_common.sh@248 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:06.653 21:11:00 -- common/autotest_common.sh@248 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:06.653 21:11:00 -- common/autotest_common.sh@251 -- # '[' 0 -eq 0 ']' 00:10:06.653 21:11:00 -- common/autotest_common.sh@252 -- # export valgrind= 00:10:06.653 21:11:00 -- common/autotest_common.sh@252 -- # valgrind= 00:10:06.653 21:11:00 -- common/autotest_common.sh@258 -- # uname -s 00:10:06.653 21:11:00 -- common/autotest_common.sh@258 -- # '[' Linux = Linux ']' 00:10:06.653 21:11:00 -- common/autotest_common.sh@259 -- # HUGEMEM=4096 00:10:06.653 21:11:00 -- common/autotest_common.sh@260 -- # export CLEAR_HUGE=yes 00:10:06.653 21:11:00 -- common/autotest_common.sh@260 -- # CLEAR_HUGE=yes 00:10:06.653 21:11:00 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:10:06.653 21:11:00 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:10:06.653 21:11:00 -- common/autotest_common.sh@268 -- # MAKE=make 00:10:06.653 21:11:00 -- common/autotest_common.sh@269 -- # MAKEFLAGS=-j128 00:10:06.653 21:11:00 -- common/autotest_common.sh@285 
-- # export HUGEMEM=4096 00:10:06.653 21:11:00 -- common/autotest_common.sh@285 -- # HUGEMEM=4096 00:10:06.653 21:11:00 -- common/autotest_common.sh@287 -- # NO_HUGE=() 00:10:06.653 21:11:00 -- common/autotest_common.sh@288 -- # TEST_MODE= 00:10:06.653 21:11:00 -- common/autotest_common.sh@289 -- # for i in "$@" 00:10:06.653 21:11:00 -- common/autotest_common.sh@290 -- # case "$i" in 00:10:06.653 21:11:00 -- common/autotest_common.sh@295 -- # TEST_TRANSPORT=tcp 00:10:06.653 21:11:00 -- common/autotest_common.sh@307 -- # [[ -z 1291597 ]] 00:10:06.653 21:11:00 -- common/autotest_common.sh@307 -- # kill -0 1291597 00:10:06.653 21:11:00 -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:10:06.653 21:11:00 -- common/autotest_common.sh@317 -- # [[ -v testdir ]] 00:10:06.653 21:11:00 -- common/autotest_common.sh@319 -- # local requested_size=2147483648 00:10:06.653 21:11:00 -- common/autotest_common.sh@320 -- # local mount target_dir 00:10:06.653 21:11:00 -- common/autotest_common.sh@322 -- # local -A mounts fss sizes avails uses 00:10:06.653 21:11:00 -- common/autotest_common.sh@323 -- # local source fs size avail mount use 00:10:06.653 21:11:00 -- common/autotest_common.sh@325 -- # local storage_fallback storage_candidates 00:10:06.653 21:11:00 -- common/autotest_common.sh@327 -- # mktemp -udt spdk.XXXXXX 00:10:06.653 21:11:00 -- common/autotest_common.sh@327 -- # storage_fallback=/tmp/spdk.Uz1PgP 00:10:06.653 21:11:00 -- common/autotest_common.sh@332 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:06.653 21:11:00 -- common/autotest_common.sh@334 -- # [[ -n '' ]] 00:10:06.653 21:11:00 -- common/autotest_common.sh@339 -- # [[ -n '' ]] 00:10:06.653 21:11:00 -- common/autotest_common.sh@344 -- # mkdir -p /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target /tmp/spdk.Uz1PgP/tests/target /tmp/spdk.Uz1PgP 00:10:06.653 21:11:00 -- common/autotest_common.sh@347 -- # requested_size=2214592512 00:10:06.653 21:11:00 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:10:06.913 21:11:00 -- common/autotest_common.sh@316 -- # df -T 00:10:06.913 21:11:00 -- common/autotest_common.sh@316 -- # grep -v Filesystem 00:10:06.913 21:11:00 -- common/autotest_common.sh@350 -- # mounts["$mount"]=spdk_devtmpfs 00:10:06.913 21:11:00 -- common/autotest_common.sh@350 -- # fss["$mount"]=devtmpfs 00:10:06.913 21:11:00 -- common/autotest_common.sh@351 -- # avails["$mount"]=67108864 00:10:06.913 21:11:00 -- common/autotest_common.sh@351 -- # sizes["$mount"]=67108864 00:10:06.913 21:11:00 -- common/autotest_common.sh@352 -- # uses["$mount"]=0 00:10:06.913 21:11:00 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:10:06.913 21:11:00 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/pmem0 00:10:06.913 21:11:00 -- common/autotest_common.sh@350 -- # fss["$mount"]=ext2 00:10:06.913 21:11:00 -- common/autotest_common.sh@351 -- # avails["$mount"]=991178752 00:10:06.913 21:11:00 -- common/autotest_common.sh@351 -- # sizes["$mount"]=5284429824 00:10:06.913 21:11:00 -- common/autotest_common.sh@352 -- # uses["$mount"]=4293251072 00:10:06.913 21:11:00 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:10:06.913 21:11:00 -- common/autotest_common.sh@350 -- # mounts["$mount"]=spdk_root 00:10:06.913 21:11:00 -- common/autotest_common.sh@350 -- # fss["$mount"]=overlay 00:10:06.913 21:11:00 -- common/autotest_common.sh@351 -- # avails["$mount"]=121069469696 
00:10:06.913 21:11:00 -- common/autotest_common.sh@351 -- # sizes["$mount"]=129472499712 00:10:06.913 21:11:00 -- common/autotest_common.sh@352 -- # uses["$mount"]=8403030016 00:10:06.913 21:11:00 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:10:06.913 21:11:00 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:10:06.913 21:11:00 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:10:06.913 21:11:00 -- common/autotest_common.sh@351 -- # avails["$mount"]=64733634560 00:10:06.913 21:11:00 -- common/autotest_common.sh@351 -- # sizes["$mount"]=64736247808 00:10:06.913 21:11:00 -- common/autotest_common.sh@352 -- # uses["$mount"]=2613248 00:10:06.913 21:11:00 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:10:06.913 21:11:00 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:10:06.913 21:11:00 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:10:06.913 21:11:00 -- common/autotest_common.sh@351 -- # avails["$mount"]=25884811264 00:10:06.913 21:11:00 -- common/autotest_common.sh@351 -- # sizes["$mount"]=25894502400 00:10:06.913 21:11:00 -- common/autotest_common.sh@352 -- # uses["$mount"]=9691136 00:10:06.913 21:11:00 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:10:06.913 21:11:00 -- common/autotest_common.sh@350 -- # mounts["$mount"]=efivarfs 00:10:06.913 21:11:00 -- common/autotest_common.sh@350 -- # fss["$mount"]=efivarfs 00:10:06.913 21:11:00 -- common/autotest_common.sh@351 -- # avails["$mount"]=66560 00:10:06.913 21:11:00 -- common/autotest_common.sh@351 -- # sizes["$mount"]=507904 00:10:06.913 21:11:00 -- common/autotest_common.sh@352 -- # uses["$mount"]=437248 00:10:06.913 21:11:00 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:10:06.913 21:11:00 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:10:06.913 21:11:00 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:10:06.913 21:11:00 -- common/autotest_common.sh@351 -- # avails["$mount"]=64734871552 00:10:06.913 21:11:00 -- common/autotest_common.sh@351 -- # sizes["$mount"]=64736251904 00:10:06.913 21:11:00 -- common/autotest_common.sh@352 -- # uses["$mount"]=1380352 00:10:06.913 21:11:00 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:10:06.913 21:11:00 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:10:06.913 21:11:00 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:10:06.913 21:11:00 -- common/autotest_common.sh@351 -- # avails["$mount"]=12947243008 00:10:06.913 21:11:00 -- common/autotest_common.sh@351 -- # sizes["$mount"]=12947247104 00:10:06.913 21:11:00 -- common/autotest_common.sh@352 -- # uses["$mount"]=4096 00:10:06.913 21:11:00 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:10:06.913 21:11:00 -- common/autotest_common.sh@355 -- # printf '* Looking for test storage...\n' 00:10:06.913 * Looking for test storage... 
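The set_test_storage pass above loads `df -T` output into per-mount associative arrays (mounts, fss, sizes, avails, uses) before scanning storage_candidates for the first mount point with at least requested_size (2214592512 bytes here) available. A minimal sketch of that parsing pattern, reconstructed from the trace rather than copied from SPDK's sources:

# Sketch only; assumes df's default 1K-block units (the traced values are bytes).
declare -A mounts fss sizes avails uses
while read -r source fs size use avail _ mount; do
    mounts["$mount"]=$source
    fss["$mount"]=$fs
    sizes["$mount"]=$((size * 1024))
    avails["$mount"]=$((avail * 1024))
    uses["$mount"]=$((use * 1024))
done < <(df -T | grep -v Filesystem)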
00:10:06.913 21:11:00 -- common/autotest_common.sh@357 -- # local target_space new_size 00:10:06.913 21:11:00 -- common/autotest_common.sh@358 -- # for target_dir in "${storage_candidates[@]}" 00:10:06.913 21:11:00 -- common/autotest_common.sh@361 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:06.913 21:11:00 -- common/autotest_common.sh@361 -- # df /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:10:06.913 21:11:00 -- common/autotest_common.sh@361 -- # mount=/ 00:10:06.913 21:11:00 -- common/autotest_common.sh@363 -- # target_space=121069469696 00:10:06.913 21:11:00 -- common/autotest_common.sh@364 -- # (( target_space == 0 || target_space < requested_size )) 00:10:06.913 21:11:00 -- common/autotest_common.sh@367 -- # (( target_space >= requested_size )) 00:10:06.913 21:11:00 -- common/autotest_common.sh@369 -- # [[ overlay == tmpfs ]] 00:10:06.913 21:11:00 -- common/autotest_common.sh@369 -- # [[ overlay == ramfs ]] 00:10:06.913 21:11:00 -- common/autotest_common.sh@369 -- # [[ / == / ]] 00:10:06.913 21:11:00 -- common/autotest_common.sh@370 -- # new_size=10617622528 00:10:06.913 21:11:00 -- common/autotest_common.sh@371 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:06.913 21:11:00 -- common/autotest_common.sh@376 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:10:06.913 21:11:00 -- common/autotest_common.sh@376 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:10:06.913 21:11:00 -- common/autotest_common.sh@377 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:10:06.913 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:10:06.913 21:11:00 -- common/autotest_common.sh@378 -- # return 0 00:10:06.913 21:11:00 -- common/autotest_common.sh@1668 -- # set -o errtrace 00:10:06.913 21:11:00 -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:10:06.913 21:11:00 -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:06.913 21:11:00 -- common/autotest_common.sh@1672 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:06.913 21:11:00 -- common/autotest_common.sh@1673 -- # true 00:10:06.913 21:11:00 -- common/autotest_common.sh@1675 -- # xtrace_fd 00:10:06.913 21:11:00 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:10:06.913 21:11:00 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:10:06.913 21:11:00 -- common/autotest_common.sh@27 -- # exec 00:10:06.913 21:11:00 -- common/autotest_common.sh@29 -- # exec 00:10:06.913 21:11:00 -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:06.913 21:11:00 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:10:06.913 21:11:00 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:06.913 21:11:00 -- common/autotest_common.sh@18 -- # set -x 00:10:06.913 21:11:00 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:10:06.913 21:11:00 -- nvmf/common.sh@7 -- # uname -s 00:10:06.913 21:11:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:06.913 21:11:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:06.913 21:11:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:06.913 21:11:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:06.913 21:11:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:06.913 21:11:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:06.913 21:11:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:06.913 21:11:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:06.913 21:11:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:06.913 21:11:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:06.913 21:11:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:10:06.913 21:11:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:10:06.913 21:11:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:06.913 21:11:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:06.913 21:11:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:10:06.914 21:11:00 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:06.914 21:11:00 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:10:06.914 21:11:00 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:06.914 21:11:00 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:06.914 21:11:00 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:06.914 21:11:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.914 21:11:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.914 21:11:00 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.914 21:11:00 -- paths/export.sh@5 -- # export PATH 00:10:06.914 21:11:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.914 21:11:00 -- nvmf/common.sh@47 -- # : 0 00:10:06.914 21:11:00 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:06.914 21:11:00 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:06.914 21:11:00 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:06.914 21:11:00 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:06.914 21:11:00 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:06.914 21:11:00 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:06.914 21:11:00 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:06.914 21:11:00 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:06.914 21:11:00 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:10:06.914 21:11:00 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:06.914 21:11:00 -- target/filesystem.sh@15 -- # nvmftestinit 00:10:06.914 21:11:00 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:10:06.914 21:11:00 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:06.914 21:11:00 -- nvmf/common.sh@437 -- # prepare_net_devs 00:10:06.914 21:11:00 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:10:06.914 21:11:00 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:10:06.914 21:11:00 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:06.914 21:11:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:06.914 21:11:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:06.914 21:11:00 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:10:06.914 21:11:00 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:10:06.914 21:11:00 -- nvmf/common.sh@285 -- # xtrace_disable 00:10:06.914 21:11:00 -- common/autotest_common.sh@10 -- # set +x 00:10:12.193 21:11:06 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:10:12.193 21:11:06 -- nvmf/common.sh@291 -- # pci_devs=() 00:10:12.193 21:11:06 -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:12.193 21:11:06 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:12.193 21:11:06 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:12.193 21:11:06 -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:12.193 21:11:06 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:12.193 21:11:06 
-- nvmf/common.sh@295 -- # net_devs=() 00:10:12.193 21:11:06 -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:12.193 21:11:06 -- nvmf/common.sh@296 -- # e810=() 00:10:12.193 21:11:06 -- nvmf/common.sh@296 -- # local -ga e810 00:10:12.193 21:11:06 -- nvmf/common.sh@297 -- # x722=() 00:10:12.193 21:11:06 -- nvmf/common.sh@297 -- # local -ga x722 00:10:12.193 21:11:06 -- nvmf/common.sh@298 -- # mlx=() 00:10:12.193 21:11:06 -- nvmf/common.sh@298 -- # local -ga mlx 00:10:12.193 21:11:06 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:12.193 21:11:06 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:12.193 21:11:06 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:12.193 21:11:06 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:12.193 21:11:06 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:12.193 21:11:06 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:12.193 21:11:06 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:12.193 21:11:06 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:12.193 21:11:06 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:12.193 21:11:06 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:12.193 21:11:06 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:12.193 21:11:06 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:12.193 21:11:06 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:12.193 21:11:06 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:10:12.193 21:11:06 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:10:12.193 21:11:06 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:10:12.193 21:11:06 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:12.193 21:11:06 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:12.193 21:11:06 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:10:12.193 Found 0000:27:00.0 (0x8086 - 0x159b) 00:10:12.193 21:11:06 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:12.193 21:11:06 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:12.193 21:11:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:12.193 21:11:06 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:12.193 21:11:06 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:12.193 21:11:06 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:12.193 21:11:06 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:10:12.193 Found 0000:27:00.1 (0x8086 - 0x159b) 00:10:12.193 21:11:06 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:12.193 21:11:06 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:12.193 21:11:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:12.193 21:11:06 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:12.451 21:11:06 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:12.451 21:11:06 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:12.451 21:11:06 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:10:12.451 21:11:06 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:12.451 21:11:06 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:12.451 21:11:06 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:12.451 21:11:06 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:12.451 21:11:06 -- nvmf/common.sh@389 -- # 
echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:10:12.451 Found net devices under 0000:27:00.0: cvl_0_0 00:10:12.451 21:11:06 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:12.451 21:11:06 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:12.451 21:11:06 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:12.451 21:11:06 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:12.452 21:11:06 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:12.452 21:11:06 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:10:12.452 Found net devices under 0000:27:00.1: cvl_0_1 00:10:12.452 21:11:06 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:12.452 21:11:06 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:10:12.452 21:11:06 -- nvmf/common.sh@403 -- # is_hw=yes 00:10:12.452 21:11:06 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:10:12.452 21:11:06 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:10:12.452 21:11:06 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:10:12.452 21:11:06 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:12.452 21:11:06 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:12.452 21:11:06 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:12.452 21:11:06 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:12.452 21:11:06 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:12.452 21:11:06 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:12.452 21:11:06 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:12.452 21:11:06 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:12.452 21:11:06 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:12.452 21:11:06 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:12.452 21:11:06 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:12.452 21:11:06 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:12.452 21:11:06 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:12.452 21:11:06 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:12.452 21:11:06 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:12.452 21:11:06 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:12.452 21:11:06 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:12.452 21:11:06 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:12.452 21:11:06 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:12.712 21:11:06 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:12.712 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:12.712 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms 00:10:12.712 00:10:12.712 --- 10.0.0.2 ping statistics --- 00:10:12.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.712 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:10:12.712 21:11:06 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:12.712 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:12.712 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:10:12.712 00:10:12.712 --- 10.0.0.1 ping statistics --- 00:10:12.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.712 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:10:12.712 21:11:06 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:12.712 21:11:06 -- nvmf/common.sh@411 -- # return 0 00:10:12.712 21:11:06 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:10:12.712 21:11:06 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:12.712 21:11:06 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:10:12.712 21:11:06 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:10:12.712 21:11:06 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:12.712 21:11:06 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:10:12.712 21:11:06 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:10:12.712 21:11:06 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:12.712 21:11:06 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:12.712 21:11:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:12.712 21:11:06 -- common/autotest_common.sh@10 -- # set +x 00:10:12.712 ************************************ 00:10:12.712 START TEST nvmf_filesystem_no_in_capsule 00:10:12.712 ************************************ 00:10:12.712 21:11:06 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 0 00:10:12.712 21:11:06 -- target/filesystem.sh@47 -- # in_capsule=0 00:10:12.712 21:11:06 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:12.712 21:11:06 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:10:12.712 21:11:06 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:12.712 21:11:06 -- common/autotest_common.sh@10 -- # set +x 00:10:12.712 21:11:06 -- nvmf/common.sh@470 -- # nvmfpid=1295253 00:10:12.712 21:11:06 -- nvmf/common.sh@471 -- # waitforlisten 1295253 00:10:12.712 21:11:06 -- common/autotest_common.sh@817 -- # '[' -z 1295253 ']' 00:10:12.712 21:11:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:12.712 21:11:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:12.712 21:11:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:12.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:12.712 21:11:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:12.712 21:11:06 -- common/autotest_common.sh@10 -- # set +x 00:10:12.712 21:11:06 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:12.712 [2024-04-23 21:11:06.945747] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 00:10:12.712 [2024-04-23 21:11:06.945848] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:12.973 EAL: No free 2048 kB hugepages reported on node 1 00:10:12.973 [2024-04-23 21:11:07.068551] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:12.973 [2024-04-23 21:11:07.161919] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:10:12.973 [2024-04-23 21:11:07.161955] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:12.973 [2024-04-23 21:11:07.161967] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:12.973 [2024-04-23 21:11:07.161975] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:12.973 [2024-04-23 21:11:07.161982] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:12.973 [2024-04-23 21:11:07.162061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:12.973 [2024-04-23 21:11:07.162159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:12.973 [2024-04-23 21:11:07.162259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.973 [2024-04-23 21:11:07.162270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:13.546 21:11:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:13.546 21:11:07 -- common/autotest_common.sh@850 -- # return 0 00:10:13.546 21:11:07 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:10:13.546 21:11:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:13.546 21:11:07 -- common/autotest_common.sh@10 -- # set +x 00:10:13.546 21:11:07 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:13.546 21:11:07 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:13.546 21:11:07 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:13.546 21:11:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:13.546 21:11:07 -- common/autotest_common.sh@10 -- # set +x 00:10:13.546 [2024-04-23 21:11:07.700704] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:13.546 21:11:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:13.546 21:11:07 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:13.546 21:11:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:13.546 21:11:07 -- common/autotest_common.sh@10 -- # set +x 00:10:13.805 Malloc1 00:10:13.805 21:11:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:13.805 21:11:07 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:13.805 21:11:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:13.805 21:11:07 -- common/autotest_common.sh@10 -- # set +x 00:10:13.805 21:11:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:13.805 21:11:07 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:13.805 21:11:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:13.805 21:11:07 -- common/autotest_common.sh@10 -- # set +x 00:10:13.805 21:11:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:13.805 21:11:07 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:13.805 21:11:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:13.805 21:11:07 -- common/autotest_common.sh@10 -- # set +x 00:10:13.805 [2024-04-23 21:11:07.966639] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:13.805 21:11:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:13.805 21:11:07 -- target/filesystem.sh@58 -- # get_bdev_size 
Malloc1 00:10:13.805 21:11:07 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:10:13.805 21:11:07 -- common/autotest_common.sh@1365 -- # local bdev_info 00:10:13.805 21:11:07 -- common/autotest_common.sh@1366 -- # local bs 00:10:13.805 21:11:07 -- common/autotest_common.sh@1367 -- # local nb 00:10:13.805 21:11:07 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:13.805 21:11:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:13.805 21:11:07 -- common/autotest_common.sh@10 -- # set +x 00:10:13.805 21:11:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:13.805 21:11:07 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:10:13.805 { 00:10:13.805 "name": "Malloc1", 00:10:13.805 "aliases": [ 00:10:13.805 "851f6795-4d8f-4a5e-963a-9eacebacd2a6" 00:10:13.805 ], 00:10:13.805 "product_name": "Malloc disk", 00:10:13.805 "block_size": 512, 00:10:13.805 "num_blocks": 1048576, 00:10:13.805 "uuid": "851f6795-4d8f-4a5e-963a-9eacebacd2a6", 00:10:13.805 "assigned_rate_limits": { 00:10:13.805 "rw_ios_per_sec": 0, 00:10:13.805 "rw_mbytes_per_sec": 0, 00:10:13.805 "r_mbytes_per_sec": 0, 00:10:13.805 "w_mbytes_per_sec": 0 00:10:13.805 }, 00:10:13.805 "claimed": true, 00:10:13.805 "claim_type": "exclusive_write", 00:10:13.805 "zoned": false, 00:10:13.805 "supported_io_types": { 00:10:13.805 "read": true, 00:10:13.805 "write": true, 00:10:13.805 "unmap": true, 00:10:13.805 "write_zeroes": true, 00:10:13.805 "flush": true, 00:10:13.805 "reset": true, 00:10:13.805 "compare": false, 00:10:13.805 "compare_and_write": false, 00:10:13.805 "abort": true, 00:10:13.805 "nvme_admin": false, 00:10:13.805 "nvme_io": false 00:10:13.805 }, 00:10:13.805 "memory_domains": [ 00:10:13.805 { 00:10:13.805 "dma_device_id": "system", 00:10:13.805 "dma_device_type": 1 00:10:13.805 }, 00:10:13.805 { 00:10:13.805 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.805 "dma_device_type": 2 00:10:13.805 } 00:10:13.805 ], 00:10:13.805 "driver_specific": {} 00:10:13.805 } 00:10:13.805 ]' 00:10:13.805 21:11:07 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:10:13.805 21:11:08 -- common/autotest_common.sh@1369 -- # bs=512 00:10:13.805 21:11:08 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:10:13.805 21:11:08 -- common/autotest_common.sh@1370 -- # nb=1048576 00:10:13.805 21:11:08 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:10:13.805 21:11:08 -- common/autotest_common.sh@1374 -- # echo 512 00:10:13.805 21:11:08 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:13.805 21:11:08 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:15.713 21:11:09 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:15.713 21:11:09 -- common/autotest_common.sh@1184 -- # local i=0 00:10:15.713 21:11:09 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:10:15.713 21:11:09 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:10:15.713 21:11:09 -- common/autotest_common.sh@1191 -- # sleep 2 00:10:17.625 21:11:11 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:10:17.625 21:11:11 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:10:17.625 21:11:11 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:10:17.625 21:11:11 -- common/autotest_common.sh@1193 -- # nvme_devices=1 
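The target/initiator bring-up traced above is compact enough to restate. get_bdev_size pulls the bdev's JSON over RPC and extracts block_size and num_blocks with jq (512 B x 1048576 blocks = 536870912, i.e. 512 MiB), which is later checked against the size of the NVMe device that appears after connect. A condensed sketch of the sequence, using only commands that appear verbatim in the trace (rpc_cmd is the harness's wrapper around scripts/rpc.py):

rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
rpc_cmd bdev_malloc_create 512 512 -b Malloc1
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
bs=$(rpc_cmd bdev_get_bdevs -b Malloc1 | jq '.[] .block_size')   # 512
nb=$(rpc_cmd bdev_get_bdevs -b Malloc1 | jq '.[] .num_blocks')   # 1048576
nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420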
00:10:17.625 21:11:11 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:10:17.625 21:11:11 -- common/autotest_common.sh@1194 -- # return 0 00:10:17.625 21:11:11 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:17.625 21:11:11 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:17.625 21:11:11 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:17.625 21:11:11 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:17.625 21:11:11 -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:17.625 21:11:11 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:17.625 21:11:11 -- setup/common.sh@80 -- # echo 536870912 00:10:17.625 21:11:11 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:17.625 21:11:11 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:17.625 21:11:11 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:17.625 21:11:11 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:17.625 21:11:11 -- target/filesystem.sh@69 -- # partprobe 00:10:17.887 21:11:11 -- target/filesystem.sh@70 -- # sleep 1 00:10:18.823 21:11:13 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:18.823 21:11:13 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:18.823 21:11:13 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:10:18.823 21:11:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:18.823 21:11:13 -- common/autotest_common.sh@10 -- # set +x 00:10:19.084 ************************************ 00:10:19.084 START TEST filesystem_ext4 00:10:19.084 ************************************ 00:10:19.084 21:11:13 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:19.084 21:11:13 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:19.084 21:11:13 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:19.084 21:11:13 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:19.084 21:11:13 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:10:19.084 21:11:13 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:10:19.084 21:11:13 -- common/autotest_common.sh@914 -- # local i=0 00:10:19.084 21:11:13 -- common/autotest_common.sh@915 -- # local force 00:10:19.084 21:11:13 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:10:19.084 21:11:13 -- common/autotest_common.sh@918 -- # force=-F 00:10:19.084 21:11:13 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:19.084 mke2fs 1.46.5 (30-Dec-2021) 00:10:19.084 Discarding device blocks: 0/522240 done 00:10:19.084 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:19.084 Filesystem UUID: edcbe773-eebb-4d7c-a997-c15600b69c66 00:10:19.084 Superblock backups stored on blocks: 00:10:19.084 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:19.084 00:10:19.084 Allocating group tables: 0/64 done 00:10:19.084 Writing inode tables: 0/64 done 00:10:19.345 Creating journal (8192 blocks): done 00:10:20.173 Writing superblocks and filesystem accounting information: 0/6410/64 done 00:10:20.173 00:10:20.173 21:11:14 -- common/autotest_common.sh@931 -- # return 0 00:10:20.173 21:11:14 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:20.435 21:11:14 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:20.435 21:11:14 -- target/filesystem.sh@25 -- # sync 00:10:20.435 21:11:14 -- target/filesystem.sh@26 -- 
# rm /mnt/device/aaa 00:10:20.435 21:11:14 -- target/filesystem.sh@27 -- # sync 00:10:20.435 21:11:14 -- target/filesystem.sh@29 -- # i=0 00:10:20.435 21:11:14 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:20.435 21:11:14 -- target/filesystem.sh@37 -- # kill -0 1295253 00:10:20.435 21:11:14 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:20.435 21:11:14 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:20.435 21:11:14 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:20.435 21:11:14 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:20.435 00:10:20.435 real 0m1.558s 00:10:20.435 user 0m0.022s 00:10:20.435 sys 0m0.038s 00:10:20.435 21:11:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:20.435 21:11:14 -- common/autotest_common.sh@10 -- # set +x 00:10:20.435 ************************************ 00:10:20.435 END TEST filesystem_ext4 00:10:20.435 ************************************ 00:10:20.435 21:11:14 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:20.435 21:11:14 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:10:20.435 21:11:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:20.435 21:11:14 -- common/autotest_common.sh@10 -- # set +x 00:10:20.697 ************************************ 00:10:20.697 START TEST filesystem_btrfs 00:10:20.697 ************************************ 00:10:20.697 21:11:14 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:20.697 21:11:14 -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:20.697 21:11:14 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:20.697 21:11:14 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:20.697 21:11:14 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:10:20.697 21:11:14 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:10:20.697 21:11:14 -- common/autotest_common.sh@914 -- # local i=0 00:10:20.697 21:11:14 -- common/autotest_common.sh@915 -- # local force 00:10:20.697 21:11:14 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:10:20.697 21:11:14 -- common/autotest_common.sh@920 -- # force=-f 00:10:20.697 21:11:14 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:20.959 btrfs-progs v6.6.2 00:10:20.959 See https://btrfs.readthedocs.io for more information. 00:10:20.959 00:10:20.959 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:10:20.959 NOTE: several default settings have changed in version 5.15, please make sure 00:10:20.959 this does not affect your deployments: 00:10:20.959 - DUP for metadata (-m dup) 00:10:20.959 - enabled no-holes (-O no-holes) 00:10:20.959 - enabled free-space-tree (-R free-space-tree) 00:10:20.959 00:10:20.959 Label: (null) 00:10:20.959 UUID: cb048e23-96c0-40d7-a937-a78db322f15d 00:10:20.959 Node size: 16384 00:10:20.959 Sector size: 4096 00:10:20.959 Filesystem size: 510.00MiB 00:10:20.959 Block group profiles: 00:10:20.959 Data: single 8.00MiB 00:10:20.959 Metadata: DUP 32.00MiB 00:10:20.959 System: DUP 8.00MiB 00:10:20.959 SSD detected: yes 00:10:20.959 Zoned device: no 00:10:20.959 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:10:20.959 Runtime features: free-space-tree 00:10:20.959 Checksum: crc32c 00:10:20.959 Number of devices: 1 00:10:20.959 Devices: 00:10:20.959 ID SIZE PATH 00:10:20.959 1 510.00MiB /dev/nvme0n1p1 00:10:20.959 00:10:20.959 21:11:15 -- common/autotest_common.sh@931 -- # return 0 00:10:20.959 21:11:15 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:21.218 21:11:15 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:21.218 21:11:15 -- target/filesystem.sh@25 -- # sync 00:10:21.218 21:11:15 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:21.218 21:11:15 -- target/filesystem.sh@27 -- # sync 00:10:21.218 21:11:15 -- target/filesystem.sh@29 -- # i=0 00:10:21.218 21:11:15 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:21.218 21:11:15 -- target/filesystem.sh@37 -- # kill -0 1295253 00:10:21.218 21:11:15 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:21.219 21:11:15 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:21.219 21:11:15 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:21.219 21:11:15 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:21.219 00:10:21.219 real 0m0.561s 00:10:21.219 user 0m0.019s 00:10:21.219 sys 0m0.057s 00:10:21.219 21:11:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:21.219 21:11:15 -- common/autotest_common.sh@10 -- # set +x 00:10:21.219 ************************************ 00:10:21.219 END TEST filesystem_btrfs 00:10:21.219 ************************************ 00:10:21.219 21:11:15 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:21.219 21:11:15 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:10:21.219 21:11:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:21.219 21:11:15 -- common/autotest_common.sh@10 -- # set +x 00:10:21.219 ************************************ 00:10:21.219 START TEST filesystem_xfs 00:10:21.219 ************************************ 00:10:21.219 21:11:15 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:10:21.219 21:11:15 -- target/filesystem.sh@18 -- # fstype=xfs 00:10:21.219 21:11:15 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:21.219 21:11:15 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:21.219 21:11:15 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:10:21.219 21:11:15 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:10:21.219 21:11:15 -- common/autotest_common.sh@914 -- # local i=0 00:10:21.219 21:11:15 -- common/autotest_common.sh@915 -- # local force 00:10:21.219 21:11:15 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:10:21.219 21:11:15 -- common/autotest_common.sh@920 -- # force=-f 00:10:21.219 21:11:15 -- 
common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:21.477 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:21.477 = sectsz=512 attr=2, projid32bit=1 00:10:21.477 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:21.477 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:21.477 data = bsize=4096 blocks=130560, imaxpct=25 00:10:21.477 = sunit=0 swidth=0 blks 00:10:21.478 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:21.478 log =internal log bsize=4096 blocks=16384, version=2 00:10:21.478 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:21.478 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:22.421 Discarding blocks...Done. 00:10:22.421 21:11:16 -- common/autotest_common.sh@931 -- # return 0 00:10:22.421 21:11:16 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:24.961 21:11:18 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:24.961 21:11:18 -- target/filesystem.sh@25 -- # sync 00:10:24.961 21:11:18 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:24.961 21:11:18 -- target/filesystem.sh@27 -- # sync 00:10:24.961 21:11:18 -- target/filesystem.sh@29 -- # i=0 00:10:24.961 21:11:18 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:24.961 21:11:18 -- target/filesystem.sh@37 -- # kill -0 1295253 00:10:24.961 21:11:18 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:24.961 21:11:18 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:24.961 21:11:18 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:24.961 21:11:18 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:24.961 00:10:24.961 real 0m3.341s 00:10:24.961 user 0m0.029s 00:10:24.961 sys 0m0.035s 00:10:24.961 21:11:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:24.961 21:11:18 -- common/autotest_common.sh@10 -- # set +x 00:10:24.961 ************************************ 00:10:24.961 END TEST filesystem_xfs 00:10:24.961 ************************************ 00:10:24.961 21:11:18 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:24.961 21:11:18 -- target/filesystem.sh@93 -- # sync 00:10:24.961 21:11:18 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:24.961 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:24.961 21:11:19 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:24.961 21:11:19 -- common/autotest_common.sh@1205 -- # local i=0 00:10:24.961 21:11:19 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:10:24.961 21:11:19 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:24.961 21:11:19 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:10:24.961 21:11:19 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:24.961 21:11:19 -- common/autotest_common.sh@1217 -- # return 0 00:10:24.961 21:11:19 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:24.961 21:11:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:24.961 21:11:19 -- common/autotest_common.sh@10 -- # set +x 00:10:24.961 21:11:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:24.961 21:11:19 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:24.961 21:11:19 -- target/filesystem.sh@101 -- # killprocess 1295253 00:10:24.961 21:11:19 -- common/autotest_common.sh@936 -- # '[' -z 1295253 ']' 00:10:24.961 21:11:19 -- common/autotest_common.sh@940 -- # kill -0 1295253 00:10:24.961 21:11:19 -- 
common/autotest_common.sh@941 -- # uname 00:10:24.961 21:11:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:24.961 21:11:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1295253 00:10:24.961 21:11:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:24.961 21:11:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:24.961 21:11:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1295253' 00:10:24.961 killing process with pid 1295253 00:10:24.961 21:11:19 -- common/autotest_common.sh@955 -- # kill 1295253 00:10:24.961 21:11:19 -- common/autotest_common.sh@960 -- # wait 1295253 00:10:25.900 21:11:20 -- target/filesystem.sh@102 -- # nvmfpid= 00:10:25.900 00:10:25.900 real 0m13.155s 00:10:25.900 user 0m50.871s 00:10:25.900 sys 0m1.137s 00:10:25.900 21:11:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:25.900 21:11:20 -- common/autotest_common.sh@10 -- # set +x 00:10:25.900 ************************************ 00:10:25.900 END TEST nvmf_filesystem_no_in_capsule 00:10:25.900 ************************************ 00:10:25.900 21:11:20 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:10:25.900 21:11:20 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:25.900 21:11:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:25.900 21:11:20 -- common/autotest_common.sh@10 -- # set +x 00:10:25.900 ************************************ 00:10:25.900 START TEST nvmf_filesystem_in_capsule 00:10:25.900 ************************************ 00:10:25.900 21:11:20 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 4096 00:10:25.900 21:11:20 -- target/filesystem.sh@47 -- # in_capsule=4096 00:10:25.900 21:11:20 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:25.900 21:11:20 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:10:25.900 21:11:20 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:25.900 21:11:20 -- common/autotest_common.sh@10 -- # set +x 00:10:25.900 21:11:20 -- nvmf/common.sh@470 -- # nvmfpid=1298144 00:10:25.900 21:11:20 -- nvmf/common.sh@471 -- # waitforlisten 1298144 00:10:25.900 21:11:20 -- common/autotest_common.sh@817 -- # '[' -z 1298144 ']' 00:10:25.900 21:11:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:25.900 21:11:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:25.900 21:11:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:25.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:25.900 21:11:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:25.900 21:11:20 -- common/autotest_common.sh@10 -- # set +x 00:10:25.900 21:11:20 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:26.160 [2024-04-23 21:11:20.215413] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 
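The in_capsule variant starting here differs from the no_in_capsule pass in one setting: the transport is created below with -c 4096, so small writes (up to 4 KiB) ride inside the NVMe/TCP command capsule instead of being fetched in a separate data transfer. A sketch of the equivalent manual bring-up with SPDK's scripts/rpc.py, assembled from the rpc_cmd calls traced in this section (all argument values are copied from the trace; the rpc.py path is an assumption, and -o/-u are carried over unchanged from the test's transport defaults):

rpc=scripts/rpc.py   # inside an SPDK checkout (assumption)

$rpc nvmf_create_transport -t tcp -o -u 8192 -c 4096   # -c: in-capsule data size
$rpc bdev_malloc_create 512 512 -b Malloc1             # 512 MiB bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420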
00:10:26.160 [2024-04-23 21:11:20.215513] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:26.160 EAL: No free 2048 kB hugepages reported on node 1 00:10:26.160 [2024-04-23 21:11:20.341029] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:26.421 [2024-04-23 21:11:20.439665] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:26.421 [2024-04-23 21:11:20.439709] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:26.421 [2024-04-23 21:11:20.439721] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:26.421 [2024-04-23 21:11:20.439730] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:26.421 [2024-04-23 21:11:20.439738] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:26.421 [2024-04-23 21:11:20.439820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:26.421 [2024-04-23 21:11:20.439846] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:26.421 [2024-04-23 21:11:20.439946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:26.421 [2024-04-23 21:11:20.439955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:26.682 21:11:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:26.682 21:11:20 -- common/autotest_common.sh@850 -- # return 0 00:10:26.682 21:11:20 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:10:26.682 21:11:20 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:26.682 21:11:20 -- common/autotest_common.sh@10 -- # set +x 00:10:26.682 21:11:20 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:26.682 21:11:20 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:26.682 21:11:20 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:10:26.682 21:11:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:26.682 21:11:20 -- common/autotest_common.sh@10 -- # set +x 00:10:26.943 [2024-04-23 21:11:20.962355] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:26.943 21:11:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:26.944 21:11:20 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:26.944 21:11:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:26.944 21:11:20 -- common/autotest_common.sh@10 -- # set +x 00:10:26.944 Malloc1 00:10:26.944 21:11:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:26.944 21:11:21 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:26.944 21:11:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:26.944 21:11:21 -- common/autotest_common.sh@10 -- # set +x 00:10:27.203 21:11:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:27.203 21:11:21 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:27.203 21:11:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:27.203 21:11:21 -- common/autotest_common.sh@10 -- # set +x 00:10:27.203 21:11:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:27.203 21:11:21 
-- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:27.203 21:11:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:27.203 21:11:21 -- common/autotest_common.sh@10 -- # set +x 00:10:27.203 [2024-04-23 21:11:21.233791] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:27.203 21:11:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:27.203 21:11:21 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:27.203 21:11:21 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:10:27.203 21:11:21 -- common/autotest_common.sh@1365 -- # local bdev_info 00:10:27.203 21:11:21 -- common/autotest_common.sh@1366 -- # local bs 00:10:27.203 21:11:21 -- common/autotest_common.sh@1367 -- # local nb 00:10:27.203 21:11:21 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:27.203 21:11:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:27.203 21:11:21 -- common/autotest_common.sh@10 -- # set +x 00:10:27.203 21:11:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:27.203 21:11:21 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:10:27.203 { 00:10:27.203 "name": "Malloc1", 00:10:27.203 "aliases": [ 00:10:27.203 "3d2e4bdb-5311-43bd-a829-b65ddb4358d5" 00:10:27.203 ], 00:10:27.203 "product_name": "Malloc disk", 00:10:27.203 "block_size": 512, 00:10:27.203 "num_blocks": 1048576, 00:10:27.203 "uuid": "3d2e4bdb-5311-43bd-a829-b65ddb4358d5", 00:10:27.203 "assigned_rate_limits": { 00:10:27.203 "rw_ios_per_sec": 0, 00:10:27.203 "rw_mbytes_per_sec": 0, 00:10:27.203 "r_mbytes_per_sec": 0, 00:10:27.203 "w_mbytes_per_sec": 0 00:10:27.203 }, 00:10:27.203 "claimed": true, 00:10:27.203 "claim_type": "exclusive_write", 00:10:27.203 "zoned": false, 00:10:27.203 "supported_io_types": { 00:10:27.203 "read": true, 00:10:27.203 "write": true, 00:10:27.203 "unmap": true, 00:10:27.203 "write_zeroes": true, 00:10:27.203 "flush": true, 00:10:27.203 "reset": true, 00:10:27.203 "compare": false, 00:10:27.203 "compare_and_write": false, 00:10:27.203 "abort": true, 00:10:27.203 "nvme_admin": false, 00:10:27.203 "nvme_io": false 00:10:27.203 }, 00:10:27.203 "memory_domains": [ 00:10:27.203 { 00:10:27.203 "dma_device_id": "system", 00:10:27.203 "dma_device_type": 1 00:10:27.203 }, 00:10:27.203 { 00:10:27.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.203 "dma_device_type": 2 00:10:27.203 } 00:10:27.203 ], 00:10:27.203 "driver_specific": {} 00:10:27.203 } 00:10:27.203 ]' 00:10:27.203 21:11:21 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:10:27.203 21:11:21 -- common/autotest_common.sh@1369 -- # bs=512 00:10:27.203 21:11:21 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:10:27.203 21:11:21 -- common/autotest_common.sh@1370 -- # nb=1048576 00:10:27.203 21:11:21 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:10:27.203 21:11:21 -- common/autotest_common.sh@1374 -- # echo 512 00:10:27.203 21:11:21 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:27.203 21:11:21 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:28.582 21:11:22 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:28.582 21:11:22 -- common/autotest_common.sh@1184 -- # local i=0 00:10:28.582 21:11:22 -- 
common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:10:28.582 21:11:22 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:10:28.582 21:11:22 -- common/autotest_common.sh@1191 -- # sleep 2 00:10:30.490 21:11:24 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:10:30.490 21:11:24 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:10:30.490 21:11:24 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:10:30.490 21:11:24 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:10:30.490 21:11:24 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:10:30.490 21:11:24 -- common/autotest_common.sh@1194 -- # return 0 00:10:30.750 21:11:24 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:30.750 21:11:24 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:30.750 21:11:24 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:30.750 21:11:24 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:30.750 21:11:24 -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:30.750 21:11:24 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:30.750 21:11:24 -- setup/common.sh@80 -- # echo 536870912 00:10:30.750 21:11:24 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:30.750 21:11:24 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:30.750 21:11:24 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:30.750 21:11:24 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:31.010 21:11:25 -- target/filesystem.sh@69 -- # partprobe 00:10:31.621 21:11:25 -- target/filesystem.sh@70 -- # sleep 1 00:10:32.556 21:11:26 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:10:32.556 21:11:26 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:32.556 21:11:26 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:10:32.556 21:11:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:32.556 21:11:26 -- common/autotest_common.sh@10 -- # set +x 00:10:32.817 ************************************ 00:10:32.817 START TEST filesystem_in_capsule_ext4 00:10:32.817 ************************************ 00:10:32.817 21:11:26 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:32.817 21:11:26 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:32.817 21:11:26 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:32.817 21:11:26 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:32.817 21:11:26 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:10:32.817 21:11:26 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:10:32.817 21:11:26 -- common/autotest_common.sh@914 -- # local i=0 00:10:32.817 21:11:26 -- common/autotest_common.sh@915 -- # local force 00:10:32.817 21:11:26 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:10:32.817 21:11:26 -- common/autotest_common.sh@918 -- # force=-F 00:10:32.817 21:11:26 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:32.817 mke2fs 1.46.5 (30-Dec-2021) 00:10:32.817 Discarding device blocks: 0/522240 done 00:10:32.817 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:32.817 Filesystem UUID: 4ce6c617-4efa-4f00-a005-2c63a0863473 00:10:32.817 Superblock backups stored on blocks: 00:10:32.817 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:32.817 00:10:32.817 
Allocating group tables: 0/64 done 00:10:32.817 Writing inode tables: 0/64 done 00:10:34.723 Creating journal (8192 blocks): done 00:10:35.319 Writing superblocks and filesystem accounting information: 0/64 6/64 done 00:10:35.319 00:10:35.319 21:11:29 -- common/autotest_common.sh@931 -- # return 0 00:10:35.319 21:11:29 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:35.620 21:11:29 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:35.620 21:11:29 -- target/filesystem.sh@25 -- # sync 00:10:35.620 21:11:29 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:35.620 21:11:29 -- target/filesystem.sh@27 -- # sync 00:10:35.620 21:11:29 -- target/filesystem.sh@29 -- # i=0 00:10:35.620 21:11:29 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:35.620 21:11:29 -- target/filesystem.sh@37 -- # kill -0 1298144 00:10:35.620 21:11:29 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:35.620 21:11:29 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:35.620 21:11:29 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:35.620 21:11:29 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:35.620 00:10:35.620 real 0m2.868s 00:10:35.620 user 0m0.012s 00:10:35.620 sys 0m0.051s 00:10:35.620 21:11:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:35.620 21:11:29 -- common/autotest_common.sh@10 -- # set +x 00:10:35.620 ************************************ 00:10:35.620 END TEST filesystem_in_capsule_ext4 00:10:35.620 ************************************ 00:10:35.620 21:11:29 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:35.620 21:11:29 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:10:35.620 21:11:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:35.620 21:11:29 -- common/autotest_common.sh@10 -- # set +x 00:10:35.620 ************************************ 00:10:35.620 START TEST filesystem_in_capsule_btrfs 00:10:35.620 ************************************ 00:10:35.620 21:11:29 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:35.620 21:11:29 -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:35.620 21:11:29 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:35.620 21:11:29 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:35.620 21:11:29 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:10:35.620 21:11:29 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:10:35.620 21:11:29 -- common/autotest_common.sh@914 -- # local i=0 00:10:35.620 21:11:29 -- common/autotest_common.sh@915 -- # local force 00:10:35.620 21:11:29 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:10:35.620 21:11:29 -- common/autotest_common.sh@920 -- # force=-f 00:10:35.620 21:11:29 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:35.881 btrfs-progs v6.6.2 00:10:35.881 See https://btrfs.readthedocs.io for more information. 00:10:35.881 00:10:35.881 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
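One step from earlier in this pass is worth annotating: after nvme connect, the test does not assume the block device appears instantly. waitforserial, traced at common/autotest_common.sh @1184-@1194, polls lsblk until the subsystem's serial number shows up. A sketch of that loop (structure and the 15-iteration bound are taken from the trace; the timeout return value is an assumption):

waitforserial() {
    local serial=$1
    local i=0
    local nvme_device_counter=1 nvme_devices=0
    while (( i++ <= 15 )); do
        sleep 2
        nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial" || true)
        (( nvme_devices == nvme_device_counter )) && return 0
    done
    return 1    # device never appeared (assumed failure path)
}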
00:10:35.881 NOTE: several default settings have changed in version 5.15, please make sure 00:10:35.881 this does not affect your deployments: 00:10:35.881 - DUP for metadata (-m dup) 00:10:35.881 - enabled no-holes (-O no-holes) 00:10:35.881 - enabled free-space-tree (-R free-space-tree) 00:10:35.881 00:10:35.881 Label: (null) 00:10:35.881 UUID: 6e8629b3-9f61-4c62-83ae-b3a391ef8e88 00:10:35.881 Node size: 16384 00:10:35.881 Sector size: 4096 00:10:35.881 Filesystem size: 510.00MiB 00:10:35.881 Block group profiles: 00:10:35.881 Data: single 8.00MiB 00:10:35.881 Metadata: DUP 32.00MiB 00:10:35.881 System: DUP 8.00MiB 00:10:35.881 SSD detected: yes 00:10:35.881 Zoned device: no 00:10:35.881 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:10:35.881 Runtime features: free-space-tree 00:10:35.881 Checksum: crc32c 00:10:35.881 Number of devices: 1 00:10:35.881 Devices: 00:10:35.881 ID SIZE PATH 00:10:35.881 1 510.00MiB /dev/nvme0n1p1 00:10:35.881 00:10:35.881 21:11:30 -- common/autotest_common.sh@931 -- # return 0 00:10:35.881 21:11:30 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:36.140 21:11:30 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:36.140 21:11:30 -- target/filesystem.sh@25 -- # sync 00:10:36.140 21:11:30 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:36.140 21:11:30 -- target/filesystem.sh@27 -- # sync 00:10:36.140 21:11:30 -- target/filesystem.sh@29 -- # i=0 00:10:36.140 21:11:30 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:36.140 21:11:30 -- target/filesystem.sh@37 -- # kill -0 1298144 00:10:36.140 21:11:30 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:36.140 21:11:30 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:36.140 21:11:30 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:36.140 21:11:30 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:36.140 00:10:36.140 real 0m0.512s 00:10:36.140 user 0m0.022s 00:10:36.140 sys 0m0.050s 00:10:36.140 21:11:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:36.140 21:11:30 -- common/autotest_common.sh@10 -- # set +x 00:10:36.140 ************************************ 00:10:36.140 END TEST filesystem_in_capsule_btrfs 00:10:36.140 ************************************ 00:10:36.140 21:11:30 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:10:36.140 21:11:30 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:10:36.140 21:11:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:36.140 21:11:30 -- common/autotest_common.sh@10 -- # set +x 00:10:36.399 ************************************ 00:10:36.399 START TEST filesystem_in_capsule_xfs 00:10:36.399 ************************************ 00:10:36.399 21:11:30 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:10:36.399 21:11:30 -- target/filesystem.sh@18 -- # fstype=xfs 00:10:36.399 21:11:30 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:36.399 21:11:30 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:36.399 21:11:30 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:10:36.399 21:11:30 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:10:36.399 21:11:30 -- common/autotest_common.sh@914 -- # local i=0 00:10:36.399 21:11:30 -- common/autotest_common.sh@915 -- # local force 00:10:36.399 21:11:30 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:10:36.399 21:11:30 -- common/autotest_common.sh@920 -- # force=-f 
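The force flag differs per filesystem, which is why the trace shows the '[' btrfs = ext4 ']' test before every mkfs call: mke2fs takes -F, while mkfs.btrfs and mkfs.xfs take -f. A sketch of make_filesystem (common/autotest_common.sh @912-@931) as reconstructed from the trace (the retry behavior hinted at by local i=0 is an assumption):

make_filesystem() {
    local fstype=$1
    local dev_name=$2
    local i=0
    local force

    if [ "$fstype" = ext4 ]; then
        force=-F             # mke2fs
    else
        force=-f             # mkfs.btrfs / mkfs.xfs
    fi

    mkfs."$fstype" $force "$dev_name" && return 0
    return 1                 # the real helper may retry here (assumption)
}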
00:10:36.399 21:11:30 -- common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:36.399 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:36.399 = sectsz=512 attr=2, projid32bit=1 00:10:36.399 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:36.399 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:36.399 data = bsize=4096 blocks=130560, imaxpct=25 00:10:36.399 = sunit=0 swidth=0 blks 00:10:36.399 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:36.399 log =internal log bsize=4096 blocks=16384, version=2 00:10:36.399 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:36.399 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:37.338 Discarding blocks...Done. 00:10:37.338 21:11:31 -- common/autotest_common.sh@931 -- # return 0 00:10:37.339 21:11:31 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:39.893 21:11:33 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:39.893 21:11:33 -- target/filesystem.sh@25 -- # sync 00:10:39.893 21:11:33 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:39.893 21:11:33 -- target/filesystem.sh@27 -- # sync 00:10:39.893 21:11:33 -- target/filesystem.sh@29 -- # i=0 00:10:39.893 21:11:33 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:39.893 21:11:33 -- target/filesystem.sh@37 -- # kill -0 1298144 00:10:39.893 21:11:33 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:39.893 21:11:33 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:39.893 21:11:33 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:39.893 21:11:33 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:39.893 00:10:39.893 real 0m3.312s 00:10:39.893 user 0m0.019s 00:10:39.893 sys 0m0.044s 00:10:39.893 21:11:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:39.893 21:11:33 -- common/autotest_common.sh@10 -- # set +x 00:10:39.893 ************************************ 00:10:39.893 END TEST filesystem_in_capsule_xfs 00:10:39.893 ************************************ 00:10:39.893 21:11:33 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:39.893 21:11:34 -- target/filesystem.sh@93 -- # sync 00:10:39.893 21:11:34 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:40.154 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:40.154 21:11:34 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:40.154 21:11:34 -- common/autotest_common.sh@1205 -- # local i=0 00:10:40.154 21:11:34 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:10:40.154 21:11:34 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:40.154 21:11:34 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:10:40.154 21:11:34 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:40.154 21:11:34 -- common/autotest_common.sh@1217 -- # return 0 00:10:40.154 21:11:34 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:40.154 21:11:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:40.154 21:11:34 -- common/autotest_common.sh@10 -- # set +x 00:10:40.154 21:11:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:40.154 21:11:34 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:40.154 21:11:34 -- target/filesystem.sh@101 -- # killprocess 1298144 00:10:40.154 21:11:34 -- common/autotest_common.sh@936 -- # '[' -z 1298144 ']' 00:10:40.154 21:11:34 -- common/autotest_common.sh@940 -- # kill -0 1298144 
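Teardown mirrors setup: delete the test partition, nvme disconnect the controller, then waitforserial_disconnect (common/autotest_common.sh @1205-@1217) polls until the serial number is gone from lsblk before the subsystem is torn down. A sketch of that wait, reconstructed from the traced commands (the loop bound is an assumption):

waitforserial_disconnect() {
    local serial=$1
    local i=0
    while lsblk -o NAME,SERIAL | grep -q -w "$serial"; do
        (( ++i > 15 )) && return 1    # give up eventually (assumed bound)
        sleep 2
    done
    # double-check with the long listing, as the trace does at @1213
    lsblk -l -o NAME,SERIAL | grep -q -w "$serial" && return 1
    return 0
}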
00:10:40.154 21:11:34 -- common/autotest_common.sh@941 -- # uname 00:10:40.154 21:11:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:40.154 21:11:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1298144 00:10:40.154 21:11:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:40.154 21:11:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:40.154 21:11:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1298144' 00:10:40.154 killing process with pid 1298144 00:10:40.154 21:11:34 -- common/autotest_common.sh@955 -- # kill 1298144 00:10:40.154 21:11:34 -- common/autotest_common.sh@960 -- # wait 1298144 00:10:41.094 21:11:35 -- target/filesystem.sh@102 -- # nvmfpid= 00:10:41.094 00:10:41.094 real 0m15.114s 00:10:41.094 user 0m58.557s 00:10:41.094 sys 0m1.181s 00:10:41.094 21:11:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:41.094 21:11:35 -- common/autotest_common.sh@10 -- # set +x 00:10:41.094 ************************************ 00:10:41.094 END TEST nvmf_filesystem_in_capsule 00:10:41.094 ************************************ 00:10:41.094 21:11:35 -- target/filesystem.sh@108 -- # nvmftestfini 00:10:41.094 21:11:35 -- nvmf/common.sh@477 -- # nvmfcleanup 00:10:41.094 21:11:35 -- nvmf/common.sh@117 -- # sync 00:10:41.094 21:11:35 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:41.094 21:11:35 -- nvmf/common.sh@120 -- # set +e 00:10:41.094 21:11:35 -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:41.094 21:11:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:41.094 rmmod nvme_tcp 00:10:41.094 rmmod nvme_fabrics 00:10:41.094 rmmod nvme_keyring 00:10:41.094 21:11:35 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:41.094 21:11:35 -- nvmf/common.sh@124 -- # set -e 00:10:41.094 21:11:35 -- nvmf/common.sh@125 -- # return 0 00:10:41.094 21:11:35 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:10:41.094 21:11:35 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:10:41.094 21:11:35 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:10:41.094 21:11:35 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:10:41.094 21:11:35 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:41.094 21:11:35 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:41.094 21:11:35 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:41.094 21:11:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:41.094 21:11:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:43.631 21:11:37 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:43.631 00:10:43.631 real 0m36.631s 00:10:43.631 user 1m51.244s 00:10:43.631 sys 0m6.764s 00:10:43.631 21:11:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:43.631 21:11:37 -- common/autotest_common.sh@10 -- # set +x 00:10:43.631 ************************************ 00:10:43.631 END TEST nvmf_filesystem 00:10:43.631 ************************************ 00:10:43.631 21:11:37 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:43.631 21:11:37 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:43.631 21:11:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:43.631 21:11:37 -- common/autotest_common.sh@10 -- # set +x 00:10:43.631 ************************************ 00:10:43.631 START TEST nvmf_discovery 00:10:43.631 ************************************ 00:10:43.631 21:11:37 -- 
common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:43.631 * Looking for test storage... 00:10:43.631 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:10:43.631 21:11:37 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:10:43.631 21:11:37 -- nvmf/common.sh@7 -- # uname -s 00:10:43.631 21:11:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:43.631 21:11:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:43.631 21:11:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:43.631 21:11:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:43.631 21:11:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:43.631 21:11:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:43.631 21:11:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:43.631 21:11:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:43.631 21:11:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:43.631 21:11:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:43.631 21:11:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:10:43.631 21:11:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:10:43.631 21:11:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:43.631 21:11:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:43.631 21:11:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:10:43.631 21:11:37 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:43.631 21:11:37 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:10:43.631 21:11:37 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:43.631 21:11:37 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:43.631 21:11:37 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:43.631 21:11:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.631 21:11:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.631 21:11:37 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.631 21:11:37 -- paths/export.sh@5 -- # export PATH 00:10:43.632 21:11:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.632 21:11:37 -- nvmf/common.sh@47 -- # : 0 00:10:43.632 21:11:37 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:43.632 21:11:37 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:43.632 21:11:37 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:43.632 21:11:37 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:43.632 21:11:37 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:43.632 21:11:37 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:43.632 21:11:37 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:43.632 21:11:37 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:43.632 21:11:37 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:10:43.632 21:11:37 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:10:43.632 21:11:37 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:10:43.632 21:11:37 -- target/discovery.sh@15 -- # hash nvme 00:10:43.632 21:11:37 -- target/discovery.sh@20 -- # nvmftestinit 00:10:43.632 21:11:37 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:10:43.632 21:11:37 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:43.632 21:11:37 -- nvmf/common.sh@437 -- # prepare_net_devs 00:10:43.632 21:11:37 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:10:43.632 21:11:37 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:10:43.632 21:11:37 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:43.632 21:11:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:43.632 21:11:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:43.632 21:11:37 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:10:43.632 21:11:37 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:10:43.632 21:11:37 -- nvmf/common.sh@285 -- # xtrace_disable 00:10:43.632 21:11:37 -- common/autotest_common.sh@10 -- # set +x 00:10:48.913 21:11:42 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:10:48.913 21:11:42 -- nvmf/common.sh@291 -- # pci_devs=() 00:10:48.913 21:11:42 -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:48.913 21:11:42 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:48.913 21:11:42 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:48.913 21:11:42 -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:48.913 21:11:42 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:48.913 
21:11:42 -- nvmf/common.sh@295 -- # net_devs=() 00:10:48.913 21:11:42 -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:48.913 21:11:42 -- nvmf/common.sh@296 -- # e810=() 00:10:48.913 21:11:42 -- nvmf/common.sh@296 -- # local -ga e810 00:10:48.913 21:11:42 -- nvmf/common.sh@297 -- # x722=() 00:10:48.913 21:11:42 -- nvmf/common.sh@297 -- # local -ga x722 00:10:48.913 21:11:42 -- nvmf/common.sh@298 -- # mlx=() 00:10:48.913 21:11:42 -- nvmf/common.sh@298 -- # local -ga mlx 00:10:48.913 21:11:42 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:48.913 21:11:42 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:48.913 21:11:42 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:48.913 21:11:42 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:48.913 21:11:42 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:48.913 21:11:42 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:48.913 21:11:42 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:48.913 21:11:42 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:48.913 21:11:42 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:48.913 21:11:42 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:48.913 21:11:42 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:48.913 21:11:42 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:48.913 21:11:42 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:48.913 21:11:42 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:10:48.913 21:11:42 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:10:48.913 21:11:42 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:10:48.913 21:11:42 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:48.913 21:11:42 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:48.913 21:11:42 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:10:48.913 Found 0000:27:00.0 (0x8086 - 0x159b) 00:10:48.913 21:11:42 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:48.913 21:11:42 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:48.913 21:11:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:48.913 21:11:42 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:48.913 21:11:42 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:48.913 21:11:42 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:48.913 21:11:42 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:10:48.913 Found 0000:27:00.1 (0x8086 - 0x159b) 00:10:48.913 21:11:42 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:48.913 21:11:42 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:48.913 21:11:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:48.913 21:11:42 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:48.913 21:11:42 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:48.913 21:11:42 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:48.913 21:11:42 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:10:48.913 21:11:42 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:48.913 21:11:42 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:48.913 21:11:42 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:48.913 21:11:42 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:48.913 21:11:42 -- 
nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:10:48.913 Found net devices under 0000:27:00.0: cvl_0_0 00:10:48.913 21:11:42 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:48.913 21:11:42 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:48.913 21:11:42 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:48.913 21:11:42 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:48.913 21:11:42 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:48.913 21:11:42 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:10:48.913 Found net devices under 0000:27:00.1: cvl_0_1 00:10:48.913 21:11:42 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:48.913 21:11:42 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:10:48.913 21:11:42 -- nvmf/common.sh@403 -- # is_hw=yes 00:10:48.913 21:11:42 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:10:48.913 21:11:42 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:10:48.913 21:11:42 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:10:48.913 21:11:42 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:48.913 21:11:42 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:48.913 21:11:42 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:48.913 21:11:42 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:48.913 21:11:42 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:48.913 21:11:42 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:48.913 21:11:42 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:48.913 21:11:42 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:48.913 21:11:42 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:48.913 21:11:42 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:48.913 21:11:42 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:48.913 21:11:42 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:48.913 21:11:42 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:48.913 21:11:43 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:48.913 21:11:43 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:48.913 21:11:43 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:48.913 21:11:43 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:48.913 21:11:43 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:48.913 21:11:43 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:48.913 21:11:43 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:48.913 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:48.913 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.866 ms 00:10:48.913 00:10:48.913 --- 10.0.0.2 ping statistics --- 00:10:48.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:48.913 rtt min/avg/max/mdev = 0.866/0.866/0.866/0.000 ms 00:10:48.913 21:11:43 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:48.913 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:48.913 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.399 ms 00:10:48.913 00:10:48.913 --- 10.0.0.1 ping statistics --- 00:10:48.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:48.913 rtt min/avg/max/mdev = 0.399/0.399/0.399/0.000 ms 00:10:48.913 21:11:43 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:48.913 21:11:43 -- nvmf/common.sh@411 -- # return 0 00:10:48.913 21:11:43 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:10:48.913 21:11:43 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:48.913 21:11:43 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:10:48.913 21:11:43 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:10:48.913 21:11:43 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:48.913 21:11:43 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:10:48.913 21:11:43 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:10:49.176 21:11:43 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:10:49.176 21:11:43 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:10:49.176 21:11:43 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:49.176 21:11:43 -- common/autotest_common.sh@10 -- # set +x 00:10:49.176 21:11:43 -- nvmf/common.sh@470 -- # nvmfpid=1305210 00:10:49.176 21:11:43 -- nvmf/common.sh@471 -- # waitforlisten 1305210 00:10:49.176 21:11:43 -- common/autotest_common.sh@817 -- # '[' -z 1305210 ']' 00:10:49.176 21:11:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:49.176 21:11:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:49.176 21:11:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:49.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:49.176 21:11:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:49.176 21:11:43 -- common/autotest_common.sh@10 -- # set +x 00:10:49.176 21:11:43 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:49.176 [2024-04-23 21:11:43.296842] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 00:10:49.176 [2024-04-23 21:11:43.296969] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:49.176 EAL: No free 2048 kB hugepages reported on node 1 00:10:49.176 [2024-04-23 21:11:43.431746] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:49.435 [2024-04-23 21:11:43.543343] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:49.435 [2024-04-23 21:11:43.543380] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:49.435 [2024-04-23 21:11:43.543393] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:49.435 [2024-04-23 21:11:43.543402] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:49.435 [2024-04-23 21:11:43.543409] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
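The single-host topology behind those two pings: nvmftestinit moves one port of the two-port NIC into a network namespace for the target (cvl_0_0, 10.0.0.2) and leaves the other in the root namespace for the initiator (cvl_0_1, 10.0.0.1). The commands below are copied from the nvmf_tcp_init trace above; only the grouping and comments are editorial:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port

ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP in
ping -c 1 10.0.0.2                                                  # initiator -> target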
00:10:49.436 [2024-04-23 21:11:43.543491] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:49.436 [2024-04-23 21:11:43.543601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:49.436 [2024-04-23 21:11:43.543716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.436 [2024-04-23 21:11:43.543727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:50.004 21:11:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:50.004 21:11:44 -- common/autotest_common.sh@850 -- # return 0 00:10:50.004 21:11:44 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:10:50.004 21:11:44 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:50.004 21:11:44 -- common/autotest_common.sh@10 -- # set +x 00:10:50.004 21:11:44 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:50.004 21:11:44 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:50.004 21:11:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:50.004 21:11:44 -- common/autotest_common.sh@10 -- # set +x 00:10:50.004 [2024-04-23 21:11:44.041520] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:50.004 21:11:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:50.004 21:11:44 -- target/discovery.sh@26 -- # seq 1 4 00:10:50.004 21:11:44 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:50.004 21:11:44 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:10:50.004 21:11:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:50.004 21:11:44 -- common/autotest_common.sh@10 -- # set +x 00:10:50.004 Null1 00:10:50.004 21:11:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:50.004 21:11:44 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:50.004 21:11:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:50.004 21:11:44 -- common/autotest_common.sh@10 -- # set +x 00:10:50.004 21:11:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:50.004 21:11:44 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:10:50.004 21:11:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:50.004 21:11:44 -- common/autotest_common.sh@10 -- # set +x 00:10:50.004 21:11:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:50.004 21:11:44 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:50.004 21:11:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:50.004 21:11:44 -- common/autotest_common.sh@10 -- # set +x 00:10:50.004 [2024-04-23 21:11:44.089759] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:50.004 21:11:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:50.005 21:11:44 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:50.005 21:11:44 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:10:50.005 21:11:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:50.005 21:11:44 -- common/autotest_common.sh@10 -- # set +x 00:10:50.005 Null2 00:10:50.005 21:11:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:50.005 21:11:44 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:10:50.005 21:11:44 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:10:50.005 21:11:44 -- common/autotest_common.sh@10 -- # set +x 00:10:50.005 21:11:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:50.005 21:11:44 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:10:50.005 21:11:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:50.005 21:11:44 -- common/autotest_common.sh@10 -- # set +x 00:10:50.005 21:11:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:50.005 21:11:44 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:50.005 21:11:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:50.005 21:11:44 -- common/autotest_common.sh@10 -- # set +x 00:10:50.005 21:11:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:50.005 21:11:44 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:50.005 21:11:44 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:10:50.005 21:11:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:50.005 21:11:44 -- common/autotest_common.sh@10 -- # set +x 00:10:50.005 Null3 00:10:50.005 21:11:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:50.005 21:11:44 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:10:50.005 21:11:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:50.005 21:11:44 -- common/autotest_common.sh@10 -- # set +x 00:10:50.005 21:11:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:50.005 21:11:44 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:10:50.005 21:11:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:50.005 21:11:44 -- common/autotest_common.sh@10 -- # set +x 00:10:50.005 21:11:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:50.005 21:11:44 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:10:50.005 21:11:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:50.005 21:11:44 -- common/autotest_common.sh@10 -- # set +x 00:10:50.005 21:11:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:50.005 21:11:44 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:50.005 21:11:44 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:10:50.005 21:11:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:50.005 21:11:44 -- common/autotest_common.sh@10 -- # set +x 00:10:50.005 Null4 00:10:50.005 21:11:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:50.005 21:11:44 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:10:50.005 21:11:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:50.005 21:11:44 -- common/autotest_common.sh@10 -- # set +x 00:10:50.005 21:11:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:50.005 21:11:44 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:10:50.005 21:11:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:50.005 21:11:44 -- common/autotest_common.sh@10 -- # set +x 00:10:50.005 21:11:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:50.005 21:11:44 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:10:50.005 
21:11:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:50.005 21:11:44 -- common/autotest_common.sh@10 -- # set +x 00:10:50.005 21:11:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:50.005 21:11:44 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:50.005 21:11:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:50.005 21:11:44 -- common/autotest_common.sh@10 -- # set +x 00:10:50.005 21:11:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:50.005 21:11:44 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:10:50.005 21:11:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:50.005 21:11:44 -- common/autotest_common.sh@10 -- # set +x 00:10:50.005 21:11:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:50.005 21:11:44 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -a 10.0.0.2 -s 4420 00:10:50.266 00:10:50.266 Discovery Log Number of Records 6, Generation counter 6 00:10:50.266 =====Discovery Log Entry 0====== 00:10:50.266 trtype: tcp 00:10:50.266 adrfam: ipv4 00:10:50.266 subtype: current discovery subsystem 00:10:50.266 treq: not required 00:10:50.266 portid: 0 00:10:50.266 trsvcid: 4420 00:10:50.266 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:50.266 traddr: 10.0.0.2 00:10:50.266 eflags: explicit discovery connections, duplicate discovery information 00:10:50.266 sectype: none 00:10:50.266 =====Discovery Log Entry 1====== 00:10:50.266 trtype: tcp 00:10:50.266 adrfam: ipv4 00:10:50.266 subtype: nvme subsystem 00:10:50.266 treq: not required 00:10:50.266 portid: 0 00:10:50.266 trsvcid: 4420 00:10:50.266 subnqn: nqn.2016-06.io.spdk:cnode1 00:10:50.266 traddr: 10.0.0.2 00:10:50.266 eflags: none 00:10:50.266 sectype: none 00:10:50.266 =====Discovery Log Entry 2====== 00:10:50.266 trtype: tcp 00:10:50.266 adrfam: ipv4 00:10:50.266 subtype: nvme subsystem 00:10:50.266 treq: not required 00:10:50.266 portid: 0 00:10:50.266 trsvcid: 4420 00:10:50.266 subnqn: nqn.2016-06.io.spdk:cnode2 00:10:50.266 traddr: 10.0.0.2 00:10:50.266 eflags: none 00:10:50.266 sectype: none 00:10:50.266 =====Discovery Log Entry 3====== 00:10:50.266 trtype: tcp 00:10:50.266 adrfam: ipv4 00:10:50.266 subtype: nvme subsystem 00:10:50.266 treq: not required 00:10:50.266 portid: 0 00:10:50.266 trsvcid: 4420 00:10:50.266 subnqn: nqn.2016-06.io.spdk:cnode3 00:10:50.266 traddr: 10.0.0.2 00:10:50.266 eflags: none 00:10:50.266 sectype: none 00:10:50.266 =====Discovery Log Entry 4====== 00:10:50.266 trtype: tcp 00:10:50.266 adrfam: ipv4 00:10:50.266 subtype: nvme subsystem 00:10:50.266 treq: not required 00:10:50.266 portid: 0 00:10:50.266 trsvcid: 4420 00:10:50.266 subnqn: nqn.2016-06.io.spdk:cnode4 00:10:50.266 traddr: 10.0.0.2 00:10:50.266 eflags: none 00:10:50.266 sectype: none 00:10:50.266 =====Discovery Log Entry 5====== 00:10:50.266 trtype: tcp 00:10:50.266 adrfam: ipv4 00:10:50.266 subtype: discovery subsystem referral 00:10:50.266 treq: not required 00:10:50.266 portid: 0 00:10:50.266 trsvcid: 4430 00:10:50.266 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:50.266 traddr: 10.0.0.2 00:10:50.266 eflags: none 00:10:50.266 sectype: none 00:10:50.266 21:11:44 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:10:50.266 Perform nvmf subsystem discovery via RPC 00:10:50.266 21:11:44 -- 
target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:10:50.266 21:11:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:50.266 21:11:44 -- common/autotest_common.sh@10 -- # set +x 00:10:50.266 [2024-04-23 21:11:44.361857] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:10:50.266 [ 00:10:50.266 { 00:10:50.266 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:50.266 "subtype": "Discovery", 00:10:50.266 "listen_addresses": [ 00:10:50.266 { 00:10:50.267 "transport": "TCP", 00:10:50.267 "trtype": "TCP", 00:10:50.267 "adrfam": "IPv4", 00:10:50.267 "traddr": "10.0.0.2", 00:10:50.267 "trsvcid": "4420" 00:10:50.267 } 00:10:50.267 ], 00:10:50.267 "allow_any_host": true, 00:10:50.267 "hosts": [] 00:10:50.267 }, 00:10:50.267 { 00:10:50.267 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:50.267 "subtype": "NVMe", 00:10:50.267 "listen_addresses": [ 00:10:50.267 { 00:10:50.267 "transport": "TCP", 00:10:50.267 "trtype": "TCP", 00:10:50.267 "adrfam": "IPv4", 00:10:50.267 "traddr": "10.0.0.2", 00:10:50.267 "trsvcid": "4420" 00:10:50.267 } 00:10:50.267 ], 00:10:50.267 "allow_any_host": true, 00:10:50.267 "hosts": [], 00:10:50.267 "serial_number": "SPDK00000000000001", 00:10:50.267 "model_number": "SPDK bdev Controller", 00:10:50.267 "max_namespaces": 32, 00:10:50.267 "min_cntlid": 1, 00:10:50.267 "max_cntlid": 65519, 00:10:50.267 "namespaces": [ 00:10:50.267 { 00:10:50.267 "nsid": 1, 00:10:50.267 "bdev_name": "Null1", 00:10:50.267 "name": "Null1", 00:10:50.267 "nguid": "DDFE340540BE429FB54B04430CACEABE", 00:10:50.267 "uuid": "ddfe3405-40be-429f-b54b-04430caceabe" 00:10:50.267 } 00:10:50.267 ] 00:10:50.267 }, 00:10:50.267 { 00:10:50.267 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:50.267 "subtype": "NVMe", 00:10:50.267 "listen_addresses": [ 00:10:50.267 { 00:10:50.267 "transport": "TCP", 00:10:50.267 "trtype": "TCP", 00:10:50.267 "adrfam": "IPv4", 00:10:50.267 "traddr": "10.0.0.2", 00:10:50.267 "trsvcid": "4420" 00:10:50.267 } 00:10:50.267 ], 00:10:50.267 "allow_any_host": true, 00:10:50.267 "hosts": [], 00:10:50.267 "serial_number": "SPDK00000000000002", 00:10:50.267 "model_number": "SPDK bdev Controller", 00:10:50.267 "max_namespaces": 32, 00:10:50.267 "min_cntlid": 1, 00:10:50.267 "max_cntlid": 65519, 00:10:50.267 "namespaces": [ 00:10:50.267 { 00:10:50.267 "nsid": 1, 00:10:50.267 "bdev_name": "Null2", 00:10:50.267 "name": "Null2", 00:10:50.267 "nguid": "38913BE395324B0AB0F73A0B5DD36BED", 00:10:50.267 "uuid": "38913be3-9532-4b0a-b0f7-3a0b5dd36bed" 00:10:50.267 } 00:10:50.267 ] 00:10:50.267 }, 00:10:50.267 { 00:10:50.267 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:10:50.267 "subtype": "NVMe", 00:10:50.267 "listen_addresses": [ 00:10:50.267 { 00:10:50.267 "transport": "TCP", 00:10:50.267 "trtype": "TCP", 00:10:50.267 "adrfam": "IPv4", 00:10:50.267 "traddr": "10.0.0.2", 00:10:50.267 "trsvcid": "4420" 00:10:50.267 } 00:10:50.267 ], 00:10:50.267 "allow_any_host": true, 00:10:50.267 "hosts": [], 00:10:50.267 "serial_number": "SPDK00000000000003", 00:10:50.267 "model_number": "SPDK bdev Controller", 00:10:50.267 "max_namespaces": 32, 00:10:50.267 "min_cntlid": 1, 00:10:50.267 "max_cntlid": 65519, 00:10:50.267 "namespaces": [ 00:10:50.267 { 00:10:50.267 "nsid": 1, 00:10:50.267 "bdev_name": "Null3", 00:10:50.267 "name": "Null3", 00:10:50.267 "nguid": "61FCCA49EEF04928BF36CC508A4A3218", 00:10:50.267 "uuid": "61fcca49-eef0-4928-bf36-cc508a4a3218" 00:10:50.267 } 00:10:50.267 ] 
00:10:50.267 }, 00:10:50.267 { 00:10:50.267 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:10:50.267 "subtype": "NVMe", 00:10:50.267 "listen_addresses": [ 00:10:50.267 { 00:10:50.267 "transport": "TCP", 00:10:50.267 "trtype": "TCP", 00:10:50.267 "adrfam": "IPv4", 00:10:50.267 "traddr": "10.0.0.2", 00:10:50.267 "trsvcid": "4420" 00:10:50.267 } 00:10:50.267 ], 00:10:50.267 "allow_any_host": true, 00:10:50.267 "hosts": [], 00:10:50.267 "serial_number": "SPDK00000000000004", 00:10:50.267 "model_number": "SPDK bdev Controller", 00:10:50.267 "max_namespaces": 32, 00:10:50.267 "min_cntlid": 1, 00:10:50.267 "max_cntlid": 65519, 00:10:50.267 "namespaces": [ 00:10:50.267 { 00:10:50.267 "nsid": 1, 00:10:50.267 "bdev_name": "Null4", 00:10:50.267 "name": "Null4", 00:10:50.267 "nguid": "ECDC83D87F264E6AB19F7583F10C0A2D", 00:10:50.267 "uuid": "ecdc83d8-7f26-4e6a-b19f-7583f10c0a2d" 00:10:50.267 } 00:10:50.267 ] 00:10:50.267 } 00:10:50.267 ] 00:10:50.267 21:11:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:50.267 21:11:44 -- target/discovery.sh@42 -- # seq 1 4 00:10:50.267 21:11:44 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:50.267 21:11:44 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:50.267 21:11:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:50.267 21:11:44 -- common/autotest_common.sh@10 -- # set +x 00:10:50.267 21:11:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:50.267 21:11:44 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:10:50.267 21:11:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:50.267 21:11:44 -- common/autotest_common.sh@10 -- # set +x 00:10:50.267 21:11:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:50.267 21:11:44 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:50.267 21:11:44 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:10:50.267 21:11:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:50.267 21:11:44 -- common/autotest_common.sh@10 -- # set +x 00:10:50.267 21:11:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:50.267 21:11:44 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:10:50.267 21:11:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:50.267 21:11:44 -- common/autotest_common.sh@10 -- # set +x 00:10:50.267 21:11:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:50.267 21:11:44 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:50.267 21:11:44 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:10:50.267 21:11:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:50.267 21:11:44 -- common/autotest_common.sh@10 -- # set +x 00:10:50.267 21:11:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:50.267 21:11:44 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:10:50.267 21:11:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:50.267 21:11:44 -- common/autotest_common.sh@10 -- # set +x 00:10:50.267 21:11:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:50.267 21:11:44 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:50.267 21:11:44 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:10:50.267 21:11:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:50.267 21:11:44 -- common/autotest_common.sh@10 -- # set +x 00:10:50.267 21:11:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
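The JSON dump above is the discovery test's second verification pass: nvmf_get_subsystems must report the same five entries (the discovery subsystem plus cnode1 through cnode4) that nvme discover just returned on the wire, after which the loop running above and below deletes each subsystem together with its backing null bdev. A minimal sketch of the whole create/verify/tear-down cycle against a running nvmf_tgt, assuming SPDK's stock scripts/rpc.py client stands in for the test's rpc_cmd wrapper:

  # Create four null-backed subsystems, each with one namespace and one TCP listener.
  for i in 1 2 3 4; do
      rpc.py bdev_null_create Null$i 102400 512            # size/block size as used in this run
      rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
      rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
      rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
  done
  rpc.py nvmf_get_subsystems | jq -r '.[].nqn'             # expect the discovery NQN plus cnode1..4
  for i in 1 2 3 4; do                                     # symmetric teardown
      rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode$i
      rpc.py bdev_null_delete Null$i
  done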
00:10:50.267 21:11:44 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:10:50.267 21:11:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:50.267 21:11:44 -- common/autotest_common.sh@10 -- # set +x 00:10:50.267 21:11:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:50.267 21:11:44 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:10:50.267 21:11:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:50.267 21:11:44 -- common/autotest_common.sh@10 -- # set +x 00:10:50.267 21:11:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:50.267 21:11:44 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:10:50.267 21:11:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:50.267 21:11:44 -- common/autotest_common.sh@10 -- # set +x 00:10:50.267 21:11:44 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:10:50.267 21:11:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:50.267 21:11:44 -- target/discovery.sh@49 -- # check_bdevs= 00:10:50.267 21:11:44 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:10:50.267 21:11:44 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:10:50.267 21:11:44 -- target/discovery.sh@57 -- # nvmftestfini 00:10:50.267 21:11:44 -- nvmf/common.sh@477 -- # nvmfcleanup 00:10:50.267 21:11:44 -- nvmf/common.sh@117 -- # sync 00:10:50.267 21:11:44 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:50.267 21:11:44 -- nvmf/common.sh@120 -- # set +e 00:10:50.267 21:11:44 -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:50.267 21:11:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:50.267 rmmod nvme_tcp 00:10:50.267 rmmod nvme_fabrics 00:10:50.529 rmmod nvme_keyring 00:10:50.529 21:11:44 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:50.529 21:11:44 -- nvmf/common.sh@124 -- # set -e 00:10:50.529 21:11:44 -- nvmf/common.sh@125 -- # return 0 00:10:50.529 21:11:44 -- nvmf/common.sh@478 -- # '[' -n 1305210 ']' 00:10:50.529 21:11:44 -- nvmf/common.sh@479 -- # killprocess 1305210 00:10:50.529 21:11:44 -- common/autotest_common.sh@936 -- # '[' -z 1305210 ']' 00:10:50.529 21:11:44 -- common/autotest_common.sh@940 -- # kill -0 1305210 00:10:50.529 21:11:44 -- common/autotest_common.sh@941 -- # uname 00:10:50.529 21:11:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:50.529 21:11:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1305210 00:10:50.529 21:11:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:50.529 21:11:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:50.529 21:11:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1305210' 00:10:50.529 killing process with pid 1305210 00:10:50.529 21:11:44 -- common/autotest_common.sh@955 -- # kill 1305210 00:10:50.529 [2024-04-23 21:11:44.622161] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:10:50.529 21:11:44 -- common/autotest_common.sh@960 -- # wait 1305210 00:10:51.100 21:11:45 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:10:51.100 21:11:45 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:10:51.100 21:11:45 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:10:51.100 21:11:45 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:51.100 21:11:45 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:51.100 21:11:45 -- nvmf/common.sh@617 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:10:51.100 21:11:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:51.100 21:11:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:53.017 21:11:47 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:53.017 00:10:53.017 real 0m9.631s 00:10:53.017 user 0m7.405s 00:10:53.017 sys 0m4.490s 00:10:53.017 21:11:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:53.017 21:11:47 -- common/autotest_common.sh@10 -- # set +x 00:10:53.017 ************************************ 00:10:53.017 END TEST nvmf_discovery 00:10:53.017 ************************************ 00:10:53.017 21:11:47 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:53.017 21:11:47 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:53.017 21:11:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:53.017 21:11:47 -- common/autotest_common.sh@10 -- # set +x 00:10:53.280 ************************************ 00:10:53.280 START TEST nvmf_referrals 00:10:53.280 ************************************ 00:10:53.280 21:11:47 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:53.280 * Looking for test storage... 00:10:53.280 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:10:53.280 21:11:47 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:10:53.280 21:11:47 -- nvmf/common.sh@7 -- # uname -s 00:10:53.280 21:11:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:53.280 21:11:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:53.280 21:11:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:53.280 21:11:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:53.280 21:11:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:53.280 21:11:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:53.280 21:11:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:53.280 21:11:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:53.280 21:11:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:53.280 21:11:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:53.280 21:11:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:10:53.280 21:11:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:10:53.280 21:11:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:53.280 21:11:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:53.280 21:11:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:10:53.280 21:11:47 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:53.280 21:11:47 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:10:53.280 21:11:47 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:53.280 21:11:47 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:53.280 21:11:47 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:53.280 21:11:47 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.280 21:11:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.280 21:11:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.280 21:11:47 -- paths/export.sh@5 -- # export PATH 00:10:53.280 21:11:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.280 21:11:47 -- nvmf/common.sh@47 -- # : 0 00:10:53.280 21:11:47 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:53.280 21:11:47 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:53.280 21:11:47 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:53.280 21:11:47 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:53.280 21:11:47 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:53.280 21:11:47 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:53.280 21:11:47 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:53.280 21:11:47 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:53.280 21:11:47 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:10:53.280 21:11:47 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:10:53.280 21:11:47 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:10:53.280 21:11:47 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:10:53.280 21:11:47 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:10:53.280 21:11:47 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:10:53.280 21:11:47 -- target/referrals.sh@37 -- # nvmftestinit 00:10:53.280 21:11:47 -- nvmf/common.sh@430 -- # '[' 
-z tcp ']' 00:10:53.280 21:11:47 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:53.280 21:11:47 -- nvmf/common.sh@437 -- # prepare_net_devs 00:10:53.280 21:11:47 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:10:53.280 21:11:47 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:10:53.280 21:11:47 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:53.280 21:11:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:53.280 21:11:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:53.280 21:11:47 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:10:53.280 21:11:47 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:10:53.280 21:11:47 -- nvmf/common.sh@285 -- # xtrace_disable 00:10:53.280 21:11:47 -- common/autotest_common.sh@10 -- # set +x 00:10:58.566 21:11:52 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:10:58.566 21:11:52 -- nvmf/common.sh@291 -- # pci_devs=() 00:10:58.566 21:11:52 -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:58.566 21:11:52 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:58.566 21:11:52 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:58.566 21:11:52 -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:58.566 21:11:52 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:58.566 21:11:52 -- nvmf/common.sh@295 -- # net_devs=() 00:10:58.566 21:11:52 -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:58.566 21:11:52 -- nvmf/common.sh@296 -- # e810=() 00:10:58.566 21:11:52 -- nvmf/common.sh@296 -- # local -ga e810 00:10:58.566 21:11:52 -- nvmf/common.sh@297 -- # x722=() 00:10:58.566 21:11:52 -- nvmf/common.sh@297 -- # local -ga x722 00:10:58.567 21:11:52 -- nvmf/common.sh@298 -- # mlx=() 00:10:58.567 21:11:52 -- nvmf/common.sh@298 -- # local -ga mlx 00:10:58.567 21:11:52 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:58.567 21:11:52 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:58.567 21:11:52 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:58.567 21:11:52 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:58.567 21:11:52 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:58.567 21:11:52 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:58.567 21:11:52 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:58.567 21:11:52 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:58.567 21:11:52 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:58.567 21:11:52 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:58.567 21:11:52 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:58.567 21:11:52 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:58.567 21:11:52 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:58.567 21:11:52 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:10:58.567 21:11:52 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:10:58.567 21:11:52 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:10:58.567 21:11:52 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:58.567 21:11:52 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:58.567 21:11:52 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:10:58.567 Found 0000:27:00.0 (0x8086 - 0x159b) 00:10:58.567 21:11:52 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:58.567 21:11:52 -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:58.567 21:11:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:58.567 21:11:52 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:58.567 21:11:52 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:58.567 21:11:52 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:58.567 21:11:52 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:10:58.567 Found 0000:27:00.1 (0x8086 - 0x159b) 00:10:58.567 21:11:52 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:58.567 21:11:52 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:58.567 21:11:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:58.567 21:11:52 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:58.567 21:11:52 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:58.567 21:11:52 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:58.567 21:11:52 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:10:58.567 21:11:52 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:58.567 21:11:52 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:58.567 21:11:52 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:58.567 21:11:52 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:58.567 21:11:52 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:10:58.567 Found net devices under 0000:27:00.0: cvl_0_0 00:10:58.567 21:11:52 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:58.567 21:11:52 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:58.567 21:11:52 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:58.567 21:11:52 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:58.567 21:11:52 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:58.567 21:11:52 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:10:58.567 Found net devices under 0000:27:00.1: cvl_0_1 00:10:58.567 21:11:52 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:58.567 21:11:52 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:10:58.567 21:11:52 -- nvmf/common.sh@403 -- # is_hw=yes 00:10:58.567 21:11:52 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:10:58.567 21:11:52 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:10:58.567 21:11:52 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:10:58.567 21:11:52 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:58.567 21:11:52 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:58.567 21:11:52 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:58.567 21:11:52 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:58.567 21:11:52 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:58.567 21:11:52 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:58.567 21:11:52 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:58.567 21:11:52 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:58.567 21:11:52 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:58.567 21:11:52 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:58.567 21:11:52 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:58.567 21:11:52 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:58.567 21:11:52 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:58.567 21:11:52 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 
dev cvl_0_1 00:10:58.567 21:11:52 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:58.567 21:11:52 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:58.567 21:11:52 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:58.827 21:11:52 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:58.827 21:11:52 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:58.827 21:11:52 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:58.827 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:58.827 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:10:58.827 00:10:58.827 --- 10.0.0.2 ping statistics --- 00:10:58.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:58.827 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:10:58.827 21:11:52 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:58.827 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:58.827 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:10:58.827 00:10:58.827 --- 10.0.0.1 ping statistics --- 00:10:58.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:58.827 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:10:58.827 21:11:52 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:58.827 21:11:52 -- nvmf/common.sh@411 -- # return 0 00:10:58.827 21:11:52 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:10:58.827 21:11:52 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:58.827 21:11:52 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:10:58.827 21:11:52 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:10:58.827 21:11:52 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:58.827 21:11:52 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:10:58.827 21:11:52 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:10:58.827 21:11:52 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:10:58.827 21:11:52 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:10:58.827 21:11:52 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:58.827 21:11:52 -- common/autotest_common.sh@10 -- # set +x 00:10:58.827 21:11:52 -- nvmf/common.sh@470 -- # nvmfpid=1309705 00:10:58.827 21:11:52 -- nvmf/common.sh@471 -- # waitforlisten 1309705 00:10:58.827 21:11:52 -- common/autotest_common.sh@817 -- # '[' -z 1309705 ']' 00:10:58.827 21:11:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:58.827 21:11:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:58.827 21:11:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:58.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:58.827 21:11:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:58.827 21:11:52 -- common/autotest_common.sh@10 -- # set +x 00:10:58.827 21:11:52 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:58.827 [2024-04-23 21:11:52.988279] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 
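That startup line is only reachable because prepare_net_devs has just finished splitting the two detected ice ports into a self-contained test rig: cvl_0_0 is moved into a private network namespace and becomes the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), and one ping in each direction proves the path before any NVMe traffic flows. Condensed, with the interface names detected in this run, the rig amounts to:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port leaves the root namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator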
00:10:58.827 [2024-04-23 21:11:52.988385] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:58.827 EAL: No free 2048 kB hugepages reported on node 1 00:10:59.088 [2024-04-23 21:11:53.109465] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:59.088 [2024-04-23 21:11:53.208773] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:59.088 [2024-04-23 21:11:53.208809] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:59.088 [2024-04-23 21:11:53.208820] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:59.088 [2024-04-23 21:11:53.208829] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:59.088 [2024-04-23 21:11:53.208836] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:59.088 [2024-04-23 21:11:53.208918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:59.088 [2024-04-23 21:11:53.209037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:59.088 [2024-04-23 21:11:53.209136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:59.088 [2024-04-23 21:11:53.209147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:59.661 21:11:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:59.661 21:11:53 -- common/autotest_common.sh@850 -- # return 0 00:10:59.661 21:11:53 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:10:59.661 21:11:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:59.661 21:11:53 -- common/autotest_common.sh@10 -- # set +x 00:10:59.661 21:11:53 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:59.661 21:11:53 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:59.661 21:11:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:59.661 21:11:53 -- common/autotest_common.sh@10 -- # set +x 00:10:59.661 [2024-04-23 21:11:53.748418] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:59.661 21:11:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:59.661 21:11:53 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:10:59.661 21:11:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:59.661 21:11:53 -- common/autotest_common.sh@10 -- # set +x 00:10:59.661 [2024-04-23 21:11:53.764643] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:10:59.661 21:11:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:59.661 21:11:53 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:10:59.661 21:11:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:59.661 21:11:53 -- common/autotest_common.sh@10 -- # set +x 00:10:59.661 21:11:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:59.661 21:11:53 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:10:59.661 21:11:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:59.661 21:11:53 -- common/autotest_common.sh@10 -- # set +x 00:10:59.661 21:11:53 -- common/autotest_common.sh@577 -- # 
[[ 0 == 0 ]] 00:10:59.661 21:11:53 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:10:59.661 21:11:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:59.661 21:11:53 -- common/autotest_common.sh@10 -- # set +x 00:10:59.661 21:11:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:59.661 21:11:53 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:59.661 21:11:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:59.661 21:11:53 -- common/autotest_common.sh@10 -- # set +x 00:10:59.661 21:11:53 -- target/referrals.sh@48 -- # jq length 00:10:59.661 21:11:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:59.661 21:11:53 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:10:59.661 21:11:53 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:10:59.661 21:11:53 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:59.661 21:11:53 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:59.661 21:11:53 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:59.661 21:11:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:59.661 21:11:53 -- common/autotest_common.sh@10 -- # set +x 00:10:59.661 21:11:53 -- target/referrals.sh@21 -- # sort 00:10:59.661 21:11:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:59.661 21:11:53 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:59.661 21:11:53 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:59.661 21:11:53 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:10:59.661 21:11:53 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:59.661 21:11:53 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:59.661 21:11:53 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:59.661 21:11:53 -- target/referrals.sh@26 -- # sort 00:10:59.661 21:11:53 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:59.920 21:11:54 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:59.920 21:11:54 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:59.920 21:11:54 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:10:59.920 21:11:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:59.920 21:11:54 -- common/autotest_common.sh@10 -- # set +x 00:10:59.920 21:11:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:59.920 21:11:54 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:10:59.920 21:11:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:59.920 21:11:54 -- common/autotest_common.sh@10 -- # set +x 00:10:59.920 21:11:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:59.920 21:11:54 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:10:59.920 21:11:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:59.920 21:11:54 -- common/autotest_common.sh@10 -- # set +x 00:10:59.920 21:11:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:59.920 21:11:54 -- target/referrals.sh@56 -- # rpc_cmd 
nvmf_discovery_get_referrals 00:10:59.920 21:11:54 -- target/referrals.sh@56 -- # jq length 00:10:59.920 21:11:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:59.920 21:11:54 -- common/autotest_common.sh@10 -- # set +x 00:10:59.920 21:11:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:59.920 21:11:54 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:10:59.920 21:11:54 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:10:59.920 21:11:54 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:59.920 21:11:54 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:59.920 21:11:54 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:59.920 21:11:54 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:59.920 21:11:54 -- target/referrals.sh@26 -- # sort 00:11:00.179 21:11:54 -- target/referrals.sh@26 -- # echo 00:11:00.179 21:11:54 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:00.179 21:11:54 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:11:00.179 21:11:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:00.179 21:11:54 -- common/autotest_common.sh@10 -- # set +x 00:11:00.179 21:11:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:00.179 21:11:54 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:00.179 21:11:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:00.179 21:11:54 -- common/autotest_common.sh@10 -- # set +x 00:11:00.179 21:11:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:00.179 21:11:54 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:00.179 21:11:54 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:00.179 21:11:54 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:00.179 21:11:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:00.179 21:11:54 -- common/autotest_common.sh@10 -- # set +x 00:11:00.179 21:11:54 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:00.179 21:11:54 -- target/referrals.sh@21 -- # sort 00:11:00.179 21:11:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:00.179 21:11:54 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:00.179 21:11:54 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:00.179 21:11:54 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:00.179 21:11:54 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:00.179 21:11:54 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:00.179 21:11:54 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:00.179 21:11:54 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:00.179 21:11:54 -- target/referrals.sh@26 -- # sort 00:11:00.179 21:11:54 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:00.179 21:11:54 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:00.179 21:11:54 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme 
subsystem' 00:11:00.179 21:11:54 -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:00.179 21:11:54 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:00.179 21:11:54 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:00.179 21:11:54 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:00.439 21:11:54 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:00.439 21:11:54 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:00.439 21:11:54 -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:00.439 21:11:54 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:00.439 21:11:54 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:00.439 21:11:54 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:00.439 21:11:54 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:00.439 21:11:54 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:00.439 21:11:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:00.439 21:11:54 -- common/autotest_common.sh@10 -- # set +x 00:11:00.439 21:11:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:00.439 21:11:54 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:00.439 21:11:54 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:00.439 21:11:54 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:00.439 21:11:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:00.439 21:11:54 -- common/autotest_common.sh@10 -- # set +x 00:11:00.439 21:11:54 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:00.439 21:11:54 -- target/referrals.sh@21 -- # sort 00:11:00.439 21:11:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:00.439 21:11:54 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:00.439 21:11:54 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:00.439 21:11:54 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:00.439 21:11:54 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:00.439 21:11:54 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:00.439 21:11:54 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:00.439 21:11:54 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:00.439 21:11:54 -- target/referrals.sh@26 -- # sort 00:11:00.439 21:11:54 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:00.439 21:11:54 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:00.439 21:11:54 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:00.439 21:11:54 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:00.439 21:11:54 -- 
target/referrals.sh@75 -- # jq -r .subnqn 00:11:00.439 21:11:54 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:00.439 21:11:54 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:00.700 21:11:54 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:00.700 21:11:54 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:00.700 21:11:54 -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:00.700 21:11:54 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:00.700 21:11:54 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:00.700 21:11:54 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:00.700 21:11:54 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:00.700 21:11:54 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:00.700 21:11:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:00.700 21:11:54 -- common/autotest_common.sh@10 -- # set +x 00:11:00.700 21:11:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:00.700 21:11:54 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:00.700 21:11:54 -- target/referrals.sh@82 -- # jq length 00:11:00.700 21:11:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:00.700 21:11:54 -- common/autotest_common.sh@10 -- # set +x 00:11:00.961 21:11:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:00.961 21:11:54 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:00.961 21:11:54 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:00.961 21:11:55 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:00.961 21:11:55 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:00.961 21:11:55 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:00.961 21:11:55 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:00.961 21:11:55 -- target/referrals.sh@26 -- # sort 00:11:00.961 21:11:55 -- target/referrals.sh@26 -- # echo 00:11:00.961 21:11:55 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:00.961 21:11:55 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:00.961 21:11:55 -- target/referrals.sh@86 -- # nvmftestfini 00:11:00.961 21:11:55 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:00.961 21:11:55 -- nvmf/common.sh@117 -- # sync 00:11:00.961 21:11:55 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:00.961 21:11:55 -- nvmf/common.sh@120 -- # set +e 00:11:00.961 21:11:55 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:00.961 21:11:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:00.961 rmmod nvme_tcp 00:11:00.961 rmmod nvme_fabrics 00:11:00.961 rmmod nvme_keyring 00:11:00.961 21:11:55 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:00.961 21:11:55 -- nvmf/common.sh@124 -- # 
set -e 00:11:00.961 21:11:55 -- nvmf/common.sh@125 -- # return 0 00:11:00.961 21:11:55 -- nvmf/common.sh@478 -- # '[' -n 1309705 ']' 00:11:00.961 21:11:55 -- nvmf/common.sh@479 -- # killprocess 1309705 00:11:00.961 21:11:55 -- common/autotest_common.sh@936 -- # '[' -z 1309705 ']' 00:11:00.962 21:11:55 -- common/autotest_common.sh@940 -- # kill -0 1309705 00:11:00.962 21:11:55 -- common/autotest_common.sh@941 -- # uname 00:11:00.962 21:11:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:00.962 21:11:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1309705 00:11:00.962 21:11:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:00.962 21:11:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:00.962 21:11:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1309705' 00:11:00.962 killing process with pid 1309705 00:11:00.962 21:11:55 -- common/autotest_common.sh@955 -- # kill 1309705 00:11:00.962 21:11:55 -- common/autotest_common.sh@960 -- # wait 1309705 00:11:01.532 21:11:55 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:01.532 21:11:55 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:11:01.532 21:11:55 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:11:01.532 21:11:55 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:01.532 21:11:55 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:01.532 21:11:55 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:01.532 21:11:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:01.532 21:11:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:04.077 21:11:57 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:04.077 00:11:04.077 real 0m10.447s 00:11:04.077 user 0m11.529s 00:11:04.077 sys 0m4.693s 00:11:04.077 21:11:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:04.077 21:11:57 -- common/autotest_common.sh@10 -- # set +x 00:11:04.077 ************************************ 00:11:04.077 END TEST nvmf_referrals 00:11:04.077 ************************************ 00:11:04.077 21:11:57 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:04.077 21:11:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:04.077 21:11:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:04.077 21:11:57 -- common/autotest_common.sh@10 -- # set +x 00:11:04.077 ************************************ 00:11:04.077 START TEST nvmf_connect_disconnect 00:11:04.077 ************************************ 00:11:04.077 21:11:57 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:04.077 * Looking for test storage... 
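Before tearing down, nvmf_referrals verified the property under test from both ends: every referral added over RPC must show up in nvmf_discovery_get_referrals and in the discovery log an initiator actually receives, with the subtype correctly split between "discovery subsystem referral" and "nvme subsystem" entries when -n is given. The cross-check reduces to the following sketch (host NQN/ID flags elided; the jq filters are the ones used above):

  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      rpc.py nvmf_discovery_add_referral -t tcp -a $ip -s 4430
  done
  rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
  nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
      | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort
  # The two listings must agree before the referrals are removed again.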
00:11:04.077 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:11:04.077 21:11:57 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:11:04.077 21:11:57 -- nvmf/common.sh@7 -- # uname -s 00:11:04.077 21:11:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:04.077 21:11:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:04.077 21:11:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:04.077 21:11:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:04.077 21:11:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:04.077 21:11:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:04.077 21:11:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:04.077 21:11:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:04.077 21:11:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:04.077 21:11:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:04.077 21:11:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:11:04.077 21:11:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:11:04.077 21:11:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:04.077 21:11:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:04.077 21:11:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:11:04.077 21:11:57 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:04.077 21:11:57 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:11:04.077 21:11:57 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:04.077 21:11:57 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:04.077 21:11:57 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:04.078 21:11:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.078 21:11:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.078 21:11:57 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.078 21:11:57 -- paths/export.sh@5 -- # export PATH 00:11:04.078 21:11:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.078 21:11:57 -- nvmf/common.sh@47 -- # : 0 00:11:04.078 21:11:57 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:04.078 21:11:57 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:04.078 21:11:57 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:04.078 21:11:57 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:04.078 21:11:57 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:04.078 21:11:57 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:04.078 21:11:57 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:04.078 21:11:57 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:04.078 21:11:57 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:04.078 21:11:57 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:04.078 21:11:57 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:04.078 21:11:57 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:11:04.078 21:11:57 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:04.078 21:11:57 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:04.078 21:11:57 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:04.078 21:11:57 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:04.078 21:11:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:04.078 21:11:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:04.078 21:11:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:04.078 21:11:57 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:11:04.078 21:11:57 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:04.078 21:11:57 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:04.078 21:11:57 -- common/autotest_common.sh@10 -- # set +x 00:11:09.361 21:12:03 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:09.361 21:12:03 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:09.361 21:12:03 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:09.361 21:12:03 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:09.361 21:12:03 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:09.361 21:12:03 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:09.361 21:12:03 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:09.361 21:12:03 -- nvmf/common.sh@295 -- # net_devs=() 00:11:09.361 21:12:03 -- nvmf/common.sh@295 -- # local -ga net_devs 
00:11:09.361 21:12:03 -- nvmf/common.sh@296 -- # e810=() 00:11:09.361 21:12:03 -- nvmf/common.sh@296 -- # local -ga e810 00:11:09.361 21:12:03 -- nvmf/common.sh@297 -- # x722=() 00:11:09.361 21:12:03 -- nvmf/common.sh@297 -- # local -ga x722 00:11:09.361 21:12:03 -- nvmf/common.sh@298 -- # mlx=() 00:11:09.361 21:12:03 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:09.361 21:12:03 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:09.361 21:12:03 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:09.361 21:12:03 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:09.361 21:12:03 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:09.361 21:12:03 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:09.361 21:12:03 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:09.361 21:12:03 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:09.361 21:12:03 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:09.361 21:12:03 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:09.361 21:12:03 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:09.361 21:12:03 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:09.361 21:12:03 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:09.361 21:12:03 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:09.361 21:12:03 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:11:09.361 21:12:03 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:11:09.361 21:12:03 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:11:09.361 21:12:03 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:09.361 21:12:03 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:09.361 21:12:03 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:11:09.361 Found 0000:27:00.0 (0x8086 - 0x159b) 00:11:09.361 21:12:03 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:09.361 21:12:03 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:09.361 21:12:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:09.361 21:12:03 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:09.361 21:12:03 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:09.361 21:12:03 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:09.361 21:12:03 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:11:09.361 Found 0000:27:00.1 (0x8086 - 0x159b) 00:11:09.361 21:12:03 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:09.361 21:12:03 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:09.361 21:12:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:09.361 21:12:03 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:09.361 21:12:03 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:09.361 21:12:03 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:09.361 21:12:03 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:11:09.361 21:12:03 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:09.361 21:12:03 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:09.361 21:12:03 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:09.361 21:12:03 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:09.361 21:12:03 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:11:09.361 Found net devices under 0000:27:00.0: 
cvl_0_0 00:11:09.361 21:12:03 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:09.361 21:12:03 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:09.361 21:12:03 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:09.361 21:12:03 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:09.361 21:12:03 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:09.361 21:12:03 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:11:09.361 Found net devices under 0000:27:00.1: cvl_0_1 00:11:09.361 21:12:03 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:09.361 21:12:03 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:09.361 21:12:03 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:09.361 21:12:03 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:09.361 21:12:03 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:11:09.361 21:12:03 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:11:09.361 21:12:03 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:09.361 21:12:03 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:09.361 21:12:03 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:09.361 21:12:03 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:09.361 21:12:03 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:09.362 21:12:03 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:09.362 21:12:03 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:09.362 21:12:03 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:09.362 21:12:03 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:09.362 21:12:03 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:09.362 21:12:03 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:09.362 21:12:03 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:09.362 21:12:03 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:09.362 21:12:03 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:09.362 21:12:03 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:09.362 21:12:03 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:09.362 21:12:03 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:09.362 21:12:03 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:09.362 21:12:03 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:09.362 21:12:03 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:09.362 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:09.362 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:11:09.362 00:11:09.362 --- 10.0.0.2 ping statistics --- 00:11:09.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:09.362 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:11:09.362 21:12:03 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:09.362 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:09.362 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:11:09.362 00:11:09.362 --- 10.0.0.1 ping statistics --- 00:11:09.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:09.362 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:11:09.362 21:12:03 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:09.362 21:12:03 -- nvmf/common.sh@411 -- # return 0 00:11:09.362 21:12:03 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:09.362 21:12:03 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:09.362 21:12:03 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:09.362 21:12:03 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:09.362 21:12:03 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:09.362 21:12:03 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:09.362 21:12:03 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:09.362 21:12:03 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:09.362 21:12:03 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:09.362 21:12:03 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:09.362 21:12:03 -- common/autotest_common.sh@10 -- # set +x 00:11:09.362 21:12:03 -- nvmf/common.sh@470 -- # nvmfpid=1314387 00:11:09.362 21:12:03 -- nvmf/common.sh@471 -- # waitforlisten 1314387 00:11:09.362 21:12:03 -- common/autotest_common.sh@817 -- # '[' -z 1314387 ']' 00:11:09.362 21:12:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:09.362 21:12:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:09.362 21:12:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:09.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:09.362 21:12:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:09.362 21:12:03 -- common/autotest_common.sh@10 -- # set +x 00:11:09.362 21:12:03 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:09.362 [2024-04-23 21:12:03.556696] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 00:11:09.362 [2024-04-23 21:12:03.556796] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:09.362 EAL: No free 2048 kB hugepages reported on node 1 00:11:09.624 [2024-04-23 21:12:03.675162] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:09.624 [2024-04-23 21:12:03.773293] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:09.624 [2024-04-23 21:12:03.773329] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:09.624 [2024-04-23 21:12:03.773341] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:09.624 [2024-04-23 21:12:03.773350] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:09.624 [2024-04-23 21:12:03.773358] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
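(The nvmf_tcp_init sequence traced above builds a two-port loopback topology out of the two discovered ice ports: cvl_0_0 is moved into a private network namespace and plays the target, while cvl_0_1 stays in the root namespace as the initiator, so NVMe/TCP traffic genuinely crosses the link. A condensed sketch of the equivalent commands, using the cvl_0_0/cvl_0_1 names and 10.0.0.0/24 addressing seen in the trace:

  # target side lives in its own namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # initiator side stays in the root namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip link set cvl_0_1 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in
  ping -c 1 10.0.0.2                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

Both pings succeed in the trace, after which nvmf_tgt is launched inside the namespace via 'ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF'.)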
00:11:09.624 [2024-04-23 21:12:03.773426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:09.624 [2024-04-23 21:12:03.773525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:09.624 [2024-04-23 21:12:03.773624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.624 [2024-04-23 21:12:03.773646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:10.195 21:12:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:10.195 21:12:04 -- common/autotest_common.sh@850 -- # return 0 00:11:10.195 21:12:04 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:10.195 21:12:04 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:10.195 21:12:04 -- common/autotest_common.sh@10 -- # set +x 00:11:10.195 21:12:04 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:10.195 21:12:04 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:10.195 21:12:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:10.195 21:12:04 -- common/autotest_common.sh@10 -- # set +x 00:11:10.195 [2024-04-23 21:12:04.314858] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:10.195 21:12:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:10.195 21:12:04 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:11:10.195 21:12:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:10.195 21:12:04 -- common/autotest_common.sh@10 -- # set +x 00:11:10.195 21:12:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:10.195 21:12:04 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:10.195 21:12:04 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:10.195 21:12:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:10.195 21:12:04 -- common/autotest_common.sh@10 -- # set +x 00:11:10.195 21:12:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:10.195 21:12:04 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:10.195 21:12:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:10.195 21:12:04 -- common/autotest_common.sh@10 -- # set +x 00:11:10.195 21:12:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:10.195 21:12:04 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:10.195 21:12:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:10.195 21:12:04 -- common/autotest_common.sh@10 -- # set +x 00:11:10.195 [2024-04-23 21:12:04.384789] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:10.196 21:12:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:10.196 21:12:04 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:11:10.196 21:12:04 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:11:10.196 21:12:04 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:11:10.196 21:12:04 -- target/connect_disconnect.sh@34 -- # set +x 00:11:12.735 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:14.641 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:17.183 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:19.167 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
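(Once the target is listening on /var/tmp/spdk.sock, connect_disconnect.sh provisions it entirely over JSON-RPC: a TCP transport, a 64 MiB malloc bdev, a subsystem, a namespace, and a TCP listener. rpc_cmd is SPDK's wrapper around scripts/rpc.py, so the same setup can be reproduced by hand roughly as follows, with the arguments taken from the trace above:

  rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
  rpc.py bdev_malloc_create 64 512    # 64 MiB bdev, 512 B blocks -> Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The 'NVMe/TCP Target Listening on 10.0.0.2 port 4420' notice confirms the listener; the script then runs 'set +x', so during the 100-iteration connect/disconnect loop only the nvme disconnect output is logged, as in the lines around this point.)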
00:11:21.705 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:24.250 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:26.161 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:28.702 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:31.242 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:33.152 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:35.698 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:38.240 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:40.154 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:42.700 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:44.607 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:47.152 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:49.689 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.600 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:54.138 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:56.680 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:58.589 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:01.128 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.106 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:05.669 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:08.205 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:10.129 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:12.670 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:14.580 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:17.120 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:19.032 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:21.574 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:24.118 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:26.031 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:28.607 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:31.147 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:33.057 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:35.597 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:38.133 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:40.037 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:42.572 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:44.477 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:47.046 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:49.001 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:51.540 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:53.449 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:55.989 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:58.531 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:00.441 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:02.977 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:04.886 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:07.424 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:09.963 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:11.871 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:13:14.413 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:16.322 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:18.861 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:20.770 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:23.308 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:25.844 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:27.752 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:30.294 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:32.287 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:34.833 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:37.374 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:39.303 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:41.844 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:43.757 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:46.301 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:48.215 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:50.758 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:53.296 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:55.837 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:57.748 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:00.288 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:02.197 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:04.736 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:07.273 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:09.179 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:11.714 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:13.621 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:16.246 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:18.793 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:20.697 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:23.238 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:25.777 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:27.690 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:30.232 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:32.137 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:34.682 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:37.218 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:39.758 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:41.665 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:44.207 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:46.110 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:48.648 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:51.183 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:53.789 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:55.699 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:58.237 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:00.161 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:00.161 21:15:54 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 
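(Each 'disconnected 1 controller(s)' line above is one full connect/disconnect round trip against the kernel NVMe/TCP initiator. Given num_iterations=100 and NVME_CONNECT='nvme connect -i 8' from the setup, the loop body is roughly the following sketch; the waitforserial step paraphrases how the test tree confirms the controller is up before disconnecting, and is not a verbatim copy of the script:

  for ((i = 0; i < 100; i++)); do
      nvme connect -i 8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
      waitforserial SPDKISFASTANDAWESOME   # wait for the /dev/nvme* controller to appear
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  done

The '-i 8' requests 8 I/O queues per controller, so every iteration also exercises queue setup and teardown on the target. Against the 'real 3m59.209s' reported below, that works out to roughly 2.3 s per connect/disconnect cycle.)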
00:15:00.161 21:15:54 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:15:00.161 21:15:54 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:00.161 21:15:54 -- nvmf/common.sh@117 -- # sync 00:15:00.161 21:15:54 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:00.161 21:15:54 -- nvmf/common.sh@120 -- # set +e 00:15:00.161 21:15:54 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:00.161 21:15:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:00.161 rmmod nvme_tcp 00:15:00.161 rmmod nvme_fabrics 00:15:00.161 rmmod nvme_keyring 00:15:00.421 21:15:54 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:00.421 21:15:54 -- nvmf/common.sh@124 -- # set -e 00:15:00.421 21:15:54 -- nvmf/common.sh@125 -- # return 0 00:15:00.421 21:15:54 -- nvmf/common.sh@478 -- # '[' -n 1314387 ']' 00:15:00.421 21:15:54 -- nvmf/common.sh@479 -- # killprocess 1314387 00:15:00.421 21:15:54 -- common/autotest_common.sh@936 -- # '[' -z 1314387 ']' 00:15:00.421 21:15:54 -- common/autotest_common.sh@940 -- # kill -0 1314387 00:15:00.421 21:15:54 -- common/autotest_common.sh@941 -- # uname 00:15:00.421 21:15:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:00.421 21:15:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1314387 00:15:00.422 21:15:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:00.422 21:15:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:00.422 21:15:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1314387' 00:15:00.422 killing process with pid 1314387 00:15:00.422 21:15:54 -- common/autotest_common.sh@955 -- # kill 1314387 00:15:00.422 21:15:54 -- common/autotest_common.sh@960 -- # wait 1314387 00:15:00.989 21:15:55 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:00.989 21:15:55 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:00.989 21:15:55 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:00.989 21:15:55 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:00.989 21:15:55 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:00.989 21:15:55 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:00.989 21:15:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:00.989 21:15:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:02.897 21:15:57 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:02.897 00:15:02.897 real 3m59.209s 00:15:02.897 user 15m18.849s 00:15:02.897 sys 0m14.123s 00:15:02.897 21:15:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:02.897 21:15:57 -- common/autotest_common.sh@10 -- # set +x 00:15:02.897 ************************************ 00:15:02.897 END TEST nvmf_connect_disconnect 00:15:02.897 ************************************ 00:15:02.897 21:15:57 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:15:02.897 21:15:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:02.897 21:15:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:02.897 21:15:57 -- common/autotest_common.sh@10 -- # set +x 00:15:03.158 ************************************ 00:15:03.158 START TEST nvmf_multitarget 00:15:03.158 ************************************ 00:15:03.158 21:15:57 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:15:03.158 * Looking for test storage... 
00:15:03.158 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:15:03.158 21:15:57 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:15:03.158 21:15:57 -- nvmf/common.sh@7 -- # uname -s 00:15:03.158 21:15:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:03.158 21:15:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:03.158 21:15:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:03.158 21:15:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:03.158 21:15:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:03.158 21:15:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:03.158 21:15:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:03.158 21:15:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:03.158 21:15:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:03.158 21:15:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:03.158 21:15:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:15:03.158 21:15:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:15:03.158 21:15:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:03.158 21:15:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:03.158 21:15:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:15:03.158 21:15:57 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:03.158 21:15:57 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:15:03.158 21:15:57 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:03.158 21:15:57 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:03.158 21:15:57 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:03.158 21:15:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:03.158 21:15:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:03.158 21:15:57 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:03.158 21:15:57 -- paths/export.sh@5 -- # export PATH 00:15:03.158 21:15:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:03.158 21:15:57 -- nvmf/common.sh@47 -- # : 0 00:15:03.158 21:15:57 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:03.158 21:15:57 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:03.158 21:15:57 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:03.158 21:15:57 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:03.158 21:15:57 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:03.158 21:15:57 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:03.158 21:15:57 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:03.158 21:15:57 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:03.159 21:15:57 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:15:03.159 21:15:57 -- target/multitarget.sh@15 -- # nvmftestinit 00:15:03.159 21:15:57 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:03.159 21:15:57 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:03.159 21:15:57 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:03.159 21:15:57 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:03.159 21:15:57 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:03.159 21:15:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:03.159 21:15:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:03.159 21:15:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:03.159 21:15:57 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:15:03.159 21:15:57 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:03.159 21:15:57 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:03.159 21:15:57 -- common/autotest_common.sh@10 -- # set +x 00:15:08.440 21:16:02 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:08.440 21:16:02 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:08.440 21:16:02 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:08.440 21:16:02 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:08.440 21:16:02 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:08.440 21:16:02 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:08.440 21:16:02 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:08.440 21:16:02 -- nvmf/common.sh@295 -- # net_devs=() 00:15:08.440 21:16:02 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:08.440 21:16:02 -- 
nvmf/common.sh@296 -- # e810=() 00:15:08.440 21:16:02 -- nvmf/common.sh@296 -- # local -ga e810 00:15:08.440 21:16:02 -- nvmf/common.sh@297 -- # x722=() 00:15:08.440 21:16:02 -- nvmf/common.sh@297 -- # local -ga x722 00:15:08.440 21:16:02 -- nvmf/common.sh@298 -- # mlx=() 00:15:08.440 21:16:02 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:08.440 21:16:02 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:08.440 21:16:02 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:08.440 21:16:02 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:08.440 21:16:02 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:08.440 21:16:02 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:08.440 21:16:02 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:08.440 21:16:02 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:08.440 21:16:02 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:08.440 21:16:02 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:08.440 21:16:02 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:08.440 21:16:02 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:08.440 21:16:02 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:08.440 21:16:02 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:08.440 21:16:02 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:15:08.440 21:16:02 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:15:08.440 21:16:02 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:15:08.440 21:16:02 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:08.440 21:16:02 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:08.440 21:16:02 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:15:08.440 Found 0000:27:00.0 (0x8086 - 0x159b) 00:15:08.440 21:16:02 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:08.440 21:16:02 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:08.440 21:16:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:08.440 21:16:02 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:08.440 21:16:02 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:08.440 21:16:02 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:08.440 21:16:02 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:15:08.440 Found 0000:27:00.1 (0x8086 - 0x159b) 00:15:08.440 21:16:02 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:08.440 21:16:02 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:08.440 21:16:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:08.440 21:16:02 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:08.440 21:16:02 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:08.440 21:16:02 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:08.440 21:16:02 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:15:08.440 21:16:02 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:08.440 21:16:02 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:08.440 21:16:02 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:08.440 21:16:02 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:08.440 21:16:02 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:15:08.440 Found net devices under 0000:27:00.0: cvl_0_0 00:15:08.440 
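(This is the same gather_supported_nvmf_pci_devs pass as in the previous test: nvmf/common.sh keeps allowlists of Intel E810/X722 and Mellanox PCI device IDs, matches them against the bus, and reads each matched function's net directory in sysfs to learn the interface name. Stripped of the driver and RDMA checks visible in the trace, the discovery step reduces to something like:

  # 0x8086:0x159b is an Intel E810 function bound to the ice driver
  for pci in 0000:27:00.0 0000:27:00.1; do
      ls "/sys/bus/pci/devices/$pci/net/"    # -> cvl_0_0 / cvl_0_1
  done

Both ports resolve to cvl_0_0 and cvl_0_1, which the TCP init code below reuses for the namespace topology.)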
21:16:02 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:08.440 21:16:02 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:08.440 21:16:02 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:08.440 21:16:02 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:08.440 21:16:02 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:08.440 21:16:02 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:15:08.440 Found net devices under 0000:27:00.1: cvl_0_1 00:15:08.440 21:16:02 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:08.440 21:16:02 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:08.440 21:16:02 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:08.440 21:16:02 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:08.440 21:16:02 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:15:08.440 21:16:02 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:15:08.440 21:16:02 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:08.440 21:16:02 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:08.440 21:16:02 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:08.440 21:16:02 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:08.440 21:16:02 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:08.440 21:16:02 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:08.440 21:16:02 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:08.440 21:16:02 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:08.440 21:16:02 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:08.440 21:16:02 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:08.440 21:16:02 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:08.440 21:16:02 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:08.440 21:16:02 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:08.440 21:16:02 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:08.441 21:16:02 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:08.441 21:16:02 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:08.441 21:16:02 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:08.702 21:16:02 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:08.702 21:16:02 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:08.702 21:16:02 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:08.702 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:08.702 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.385 ms 00:15:08.702 00:15:08.702 --- 10.0.0.2 ping statistics --- 00:15:08.702 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:08.702 rtt min/avg/max/mdev = 0.385/0.385/0.385/0.000 ms 00:15:08.702 21:16:02 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:08.702 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:08.702 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:15:08.702 00:15:08.702 --- 10.0.0.1 ping statistics --- 00:15:08.702 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:08.702 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:15:08.702 21:16:02 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:08.702 21:16:02 -- nvmf/common.sh@411 -- # return 0 00:15:08.702 21:16:02 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:08.702 21:16:02 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:08.702 21:16:02 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:08.702 21:16:02 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:08.702 21:16:02 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:08.702 21:16:02 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:08.702 21:16:02 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:08.702 21:16:02 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:15:08.702 21:16:02 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:08.702 21:16:02 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:08.702 21:16:02 -- common/autotest_common.sh@10 -- # set +x 00:15:08.702 21:16:02 -- nvmf/common.sh@470 -- # nvmfpid=1364529 00:15:08.702 21:16:02 -- nvmf/common.sh@471 -- # waitforlisten 1364529 00:15:08.702 21:16:02 -- common/autotest_common.sh@817 -- # '[' -z 1364529 ']' 00:15:08.702 21:16:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:08.702 21:16:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:08.702 21:16:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:08.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:08.702 21:16:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:08.702 21:16:02 -- common/autotest_common.sh@10 -- # set +x 00:15:08.702 21:16:02 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:08.961 [2024-04-23 21:16:02.981219] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 00:15:08.961 [2024-04-23 21:16:02.981347] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:08.961 EAL: No free 2048 kB hugepages reported on node 1 00:15:08.961 [2024-04-23 21:16:03.119080] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:08.961 [2024-04-23 21:16:03.213638] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:08.961 [2024-04-23 21:16:03.213682] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:08.961 [2024-04-23 21:16:03.213694] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:08.961 [2024-04-23 21:16:03.213704] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:08.961 [2024-04-23 21:16:03.213711] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
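(With the second nvmf_tgt instance coming up, the multitarget test proper is small: it drives multitarget_rpc.py to create, enumerate, and delete extra nvmf targets inside the single nvmf_tgt process, checking the target count with jq at each step. The exchange traced below is roughly equivalent to:

  multitarget_rpc.py nvmf_get_targets | jq length      # 1 (the default target)
  multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32
  multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32
  multitarget_rpc.py nvmf_get_targets | jq length      # now 3
  multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1
  multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2
  multitarget_rpc.py nvmf_get_targets | jq length      # back to 1

Each create call echoes the new target's name ("nvmf_tgt_1", "nvmf_tgt_2") and each delete returns true, which is exactly what appears in the trace.)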
00:15:08.961 [2024-04-23 21:16:03.213770] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:08.961 [2024-04-23 21:16:03.213875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:08.961 [2024-04-23 21:16:03.213975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:08.961 [2024-04-23 21:16:03.213986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:09.528 21:16:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:09.529 21:16:03 -- common/autotest_common.sh@850 -- # return 0 00:15:09.529 21:16:03 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:09.529 21:16:03 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:09.529 21:16:03 -- common/autotest_common.sh@10 -- # set +x 00:15:09.529 21:16:03 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:09.529 21:16:03 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:09.529 21:16:03 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:15:09.529 21:16:03 -- target/multitarget.sh@21 -- # jq length 00:15:09.790 21:16:03 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:15:09.790 21:16:03 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:15:09.790 "nvmf_tgt_1" 00:15:09.790 21:16:03 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:15:09.790 "nvmf_tgt_2" 00:15:09.790 21:16:03 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:15:09.790 21:16:03 -- target/multitarget.sh@28 -- # jq length 00:15:09.790 21:16:04 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:15:09.790 21:16:04 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:15:10.052 true 00:15:10.052 21:16:04 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:15:10.052 true 00:15:10.052 21:16:04 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:15:10.052 21:16:04 -- target/multitarget.sh@35 -- # jq length 00:15:10.052 21:16:04 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:15:10.052 21:16:04 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:15:10.052 21:16:04 -- target/multitarget.sh@41 -- # nvmftestfini 00:15:10.052 21:16:04 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:10.314 21:16:04 -- nvmf/common.sh@117 -- # sync 00:15:10.314 21:16:04 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:10.314 21:16:04 -- nvmf/common.sh@120 -- # set +e 00:15:10.314 21:16:04 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:10.314 21:16:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:10.314 rmmod nvme_tcp 00:15:10.314 rmmod nvme_fabrics 00:15:10.314 rmmod nvme_keyring 00:15:10.314 21:16:04 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:10.314 21:16:04 -- nvmf/common.sh@124 -- # set -e 00:15:10.314 21:16:04 -- nvmf/common.sh@125 -- # return 0 00:15:10.314 21:16:04 -- nvmf/common.sh@478 
-- # '[' -n 1364529 ']' 00:15:10.314 21:16:04 -- nvmf/common.sh@479 -- # killprocess 1364529 00:15:10.314 21:16:04 -- common/autotest_common.sh@936 -- # '[' -z 1364529 ']' 00:15:10.314 21:16:04 -- common/autotest_common.sh@940 -- # kill -0 1364529 00:15:10.314 21:16:04 -- common/autotest_common.sh@941 -- # uname 00:15:10.314 21:16:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:10.314 21:16:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1364529 00:15:10.314 21:16:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:10.314 21:16:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:10.314 21:16:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1364529' 00:15:10.314 killing process with pid 1364529 00:15:10.314 21:16:04 -- common/autotest_common.sh@955 -- # kill 1364529 00:15:10.314 21:16:04 -- common/autotest_common.sh@960 -- # wait 1364529 00:15:10.882 21:16:04 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:10.882 21:16:04 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:10.882 21:16:04 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:10.882 21:16:04 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:10.882 21:16:04 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:10.882 21:16:04 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:10.882 21:16:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:10.882 21:16:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:12.786 21:16:06 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:12.786 00:15:12.786 real 0m9.758s 00:15:12.786 user 0m8.566s 00:15:12.787 sys 0m4.616s 00:15:12.787 21:16:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:12.787 21:16:06 -- common/autotest_common.sh@10 -- # set +x 00:15:12.787 ************************************ 00:15:12.787 END TEST nvmf_multitarget 00:15:12.787 ************************************ 00:15:12.787 21:16:06 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:15:12.787 21:16:06 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:12.787 21:16:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:12.787 21:16:06 -- common/autotest_common.sh@10 -- # set +x 00:15:13.048 ************************************ 00:15:13.048 START TEST nvmf_rpc 00:15:13.048 ************************************ 00:15:13.048 21:16:07 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:15:13.048 * Looking for test storage... 
00:15:13.048 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:15:13.048 21:16:07 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:15:13.048 21:16:07 -- nvmf/common.sh@7 -- # uname -s 00:15:13.048 21:16:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:13.048 21:16:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:13.048 21:16:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:13.048 21:16:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:13.048 21:16:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:13.048 21:16:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:13.048 21:16:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:13.048 21:16:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:13.048 21:16:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:13.048 21:16:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:13.048 21:16:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:15:13.048 21:16:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:15:13.048 21:16:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:13.048 21:16:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:13.048 21:16:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:15:13.048 21:16:07 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:13.048 21:16:07 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:15:13.048 21:16:07 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:13.048 21:16:07 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:13.048 21:16:07 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:13.048 21:16:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.048 21:16:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.048 21:16:07 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.048 21:16:07 -- paths/export.sh@5 -- # export PATH 00:15:13.048 21:16:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.048 21:16:07 -- nvmf/common.sh@47 -- # : 0 00:15:13.048 21:16:07 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:13.048 21:16:07 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:13.048 21:16:07 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:13.048 21:16:07 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:13.048 21:16:07 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:13.048 21:16:07 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:13.048 21:16:07 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:13.048 21:16:07 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:13.048 21:16:07 -- target/rpc.sh@11 -- # loops=5 00:15:13.048 21:16:07 -- target/rpc.sh@23 -- # nvmftestinit 00:15:13.048 21:16:07 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:13.048 21:16:07 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:13.048 21:16:07 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:13.048 21:16:07 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:13.048 21:16:07 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:13.048 21:16:07 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:13.048 21:16:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:13.048 21:16:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:13.048 21:16:07 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:15:13.048 21:16:07 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:13.048 21:16:07 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:13.048 21:16:07 -- common/autotest_common.sh@10 -- # set +x 00:15:18.328 21:16:12 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:18.328 21:16:12 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:18.328 21:16:12 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:18.328 21:16:12 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:18.328 21:16:12 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:18.328 21:16:12 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:18.328 21:16:12 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:18.328 21:16:12 -- nvmf/common.sh@295 -- # net_devs=() 00:15:18.328 21:16:12 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:18.328 21:16:12 -- nvmf/common.sh@296 -- # e810=() 00:15:18.328 21:16:12 -- nvmf/common.sh@296 -- # local -ga e810 
00:15:18.328 21:16:12 -- nvmf/common.sh@297 -- # x722=() 00:15:18.328 21:16:12 -- nvmf/common.sh@297 -- # local -ga x722 00:15:18.328 21:16:12 -- nvmf/common.sh@298 -- # mlx=() 00:15:18.328 21:16:12 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:18.328 21:16:12 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:18.328 21:16:12 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:18.328 21:16:12 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:18.328 21:16:12 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:18.328 21:16:12 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:18.328 21:16:12 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:18.328 21:16:12 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:18.328 21:16:12 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:18.328 21:16:12 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:18.328 21:16:12 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:18.328 21:16:12 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:18.328 21:16:12 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:18.328 21:16:12 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:18.328 21:16:12 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:15:18.328 21:16:12 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:15:18.328 21:16:12 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:15:18.328 21:16:12 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:18.328 21:16:12 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:18.328 21:16:12 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:15:18.328 Found 0000:27:00.0 (0x8086 - 0x159b) 00:15:18.328 21:16:12 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:18.328 21:16:12 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:18.328 21:16:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:18.328 21:16:12 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:18.329 21:16:12 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:18.329 21:16:12 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:18.329 21:16:12 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:15:18.329 Found 0000:27:00.1 (0x8086 - 0x159b) 00:15:18.329 21:16:12 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:18.329 21:16:12 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:18.329 21:16:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:18.329 21:16:12 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:18.329 21:16:12 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:18.329 21:16:12 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:18.329 21:16:12 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:15:18.329 21:16:12 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:18.329 21:16:12 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:18.329 21:16:12 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:18.329 21:16:12 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:18.329 21:16:12 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:15:18.329 Found net devices under 0000:27:00.0: cvl_0_0 00:15:18.329 21:16:12 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:18.329 21:16:12 -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:18.329 21:16:12 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:18.329 21:16:12 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:18.329 21:16:12 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:18.329 21:16:12 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:15:18.329 Found net devices under 0000:27:00.1: cvl_0_1 00:15:18.329 21:16:12 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:18.329 21:16:12 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:18.329 21:16:12 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:18.329 21:16:12 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:18.329 21:16:12 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:15:18.329 21:16:12 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:15:18.329 21:16:12 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:18.329 21:16:12 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:18.329 21:16:12 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:18.329 21:16:12 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:18.329 21:16:12 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:18.329 21:16:12 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:18.329 21:16:12 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:18.329 21:16:12 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:18.329 21:16:12 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:18.329 21:16:12 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:18.329 21:16:12 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:18.329 21:16:12 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:18.329 21:16:12 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:18.329 21:16:12 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:18.591 21:16:12 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:18.591 21:16:12 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:18.591 21:16:12 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:18.591 21:16:12 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:18.591 21:16:12 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:18.591 21:16:12 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:18.591 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:18.591 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.345 ms 00:15:18.591 00:15:18.591 --- 10.0.0.2 ping statistics --- 00:15:18.591 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:18.591 rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms 00:15:18.591 21:16:12 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:18.591 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:18.591 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms
00:15:18.591 
00:15:18.591 --- 10.0.0.1 ping statistics ---
00:15:18.591 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:15:18.591 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms
00:15:18.591 21:16:12 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:15:18.591 21:16:12 -- nvmf/common.sh@411 -- # return 0
00:15:18.591 21:16:12 -- nvmf/common.sh@439 -- # '[' '' == iso ']'
00:15:18.591 21:16:12 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:15:18.591 21:16:12 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]]
00:15:18.591 21:16:12 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]]
00:15:18.591 21:16:12 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:15:18.591 21:16:12 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']'
00:15:18.591 21:16:12 -- nvmf/common.sh@463 -- # modprobe nvme-tcp
00:15:18.591 21:16:12 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF
00:15:18.591 21:16:12 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:15:18.591 21:16:12 -- common/autotest_common.sh@710 -- # xtrace_disable
00:15:18.591 21:16:12 -- common/autotest_common.sh@10 -- # set +x
00:15:18.591 21:16:12 -- nvmf/common.sh@470 -- # nvmfpid=1369018
00:15:18.591 21:16:12 -- nvmf/common.sh@471 -- # waitforlisten 1369018
00:15:18.591 21:16:12 -- common/autotest_common.sh@817 -- # '[' -z 1369018 ']'
00:15:18.591 21:16:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:18.591 21:16:12 -- common/autotest_common.sh@822 -- # local max_retries=100
00:15:18.591 21:16:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:15:18.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:15:18.591 21:16:12 -- common/autotest_common.sh@826 -- # xtrace_disable
00:15:18.591 21:16:12 -- common/autotest_common.sh@10 -- # set +x
00:15:18.591 21:16:12 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:15:18.591 [2024-04-23 21:16:12.858679] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization...
00:15:18.591 [2024-04-23 21:16:12.858787] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:15:18.850 EAL: No free 2048 kB hugepages reported on node 1
00:15:18.850 [2024-04-23 21:16:12.981918] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4
00:15:18.850 [2024-04-23 21:16:13.081155] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:15:18.850 [2024-04-23 21:16:13.081191] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:15:18.850 [2024-04-23 21:16:13.081202] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:15:18.850 [2024-04-23 21:16:13.081211] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:15:18.850 [2024-04-23 21:16:13.081219] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
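The nvmf_tcp_init sequence traced above (nvmf/common.sh@229-268) is easier to read as a plain recipe. The sketch below is condensed from the commands the harness actually logged, using the interface names this rig enumerated (cvl_0_0/cvl_0_1) and the harness's fixed 10.0.0.1/10.0.0.2 addressing; it recaps the trace rather than documenting an independent setup:

    # Recap of nvmf_tcp_init as logged above: target port in a private netns,
    # initiator port in the root namespace, NVMe/TCP served on 10.0.0.2:4420.
    ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1            # start clean
    ip netns add cvl_0_0_ns_spdk                                  # target namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # move target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator IP (root ns)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                                            # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # target -> initiator

Only after both pings succeed does nvmfappstart launch nvmf_tgt inside the namespace (the `ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt` line above), so every NVMe/TCP connection in this test crosses the netns boundary.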
00:15:18.850 [2024-04-23 21:16:13.081294] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:18.850 [2024-04-23 21:16:13.081395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:18.850 [2024-04-23 21:16:13.081495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:18.850 [2024-04-23 21:16:13.081505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:19.417 21:16:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:19.417 21:16:13 -- common/autotest_common.sh@850 -- # return 0 00:15:19.417 21:16:13 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:19.417 21:16:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:19.417 21:16:13 -- common/autotest_common.sh@10 -- # set +x 00:15:19.418 21:16:13 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:19.418 21:16:13 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:15:19.418 21:16:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:19.418 21:16:13 -- common/autotest_common.sh@10 -- # set +x 00:15:19.418 21:16:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:19.418 21:16:13 -- target/rpc.sh@26 -- # stats='{ 00:15:19.418 "tick_rate": 1900000000, 00:15:19.418 "poll_groups": [ 00:15:19.418 { 00:15:19.418 "name": "nvmf_tgt_poll_group_0", 00:15:19.418 "admin_qpairs": 0, 00:15:19.418 "io_qpairs": 0, 00:15:19.418 "current_admin_qpairs": 0, 00:15:19.418 "current_io_qpairs": 0, 00:15:19.418 "pending_bdev_io": 0, 00:15:19.418 "completed_nvme_io": 0, 00:15:19.418 "transports": [] 00:15:19.418 }, 00:15:19.418 { 00:15:19.418 "name": "nvmf_tgt_poll_group_1", 00:15:19.418 "admin_qpairs": 0, 00:15:19.418 "io_qpairs": 0, 00:15:19.418 "current_admin_qpairs": 0, 00:15:19.418 "current_io_qpairs": 0, 00:15:19.418 "pending_bdev_io": 0, 00:15:19.418 "completed_nvme_io": 0, 00:15:19.418 "transports": [] 00:15:19.418 }, 00:15:19.418 { 00:15:19.418 "name": "nvmf_tgt_poll_group_2", 00:15:19.418 "admin_qpairs": 0, 00:15:19.418 "io_qpairs": 0, 00:15:19.418 "current_admin_qpairs": 0, 00:15:19.418 "current_io_qpairs": 0, 00:15:19.418 "pending_bdev_io": 0, 00:15:19.418 "completed_nvme_io": 0, 00:15:19.418 "transports": [] 00:15:19.418 }, 00:15:19.418 { 00:15:19.418 "name": "nvmf_tgt_poll_group_3", 00:15:19.418 "admin_qpairs": 0, 00:15:19.418 "io_qpairs": 0, 00:15:19.418 "current_admin_qpairs": 0, 00:15:19.418 "current_io_qpairs": 0, 00:15:19.418 "pending_bdev_io": 0, 00:15:19.418 "completed_nvme_io": 0, 00:15:19.418 "transports": [] 00:15:19.418 } 00:15:19.418 ] 00:15:19.418 }' 00:15:19.418 21:16:13 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:15:19.418 21:16:13 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:15:19.418 21:16:13 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:15:19.418 21:16:13 -- target/rpc.sh@15 -- # wc -l 00:15:19.418 21:16:13 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:15:19.418 21:16:13 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:15:19.418 21:16:13 -- target/rpc.sh@29 -- # [[ null == null ]] 00:15:19.418 21:16:13 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:19.418 21:16:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:19.418 21:16:13 -- common/autotest_common.sh@10 -- # set +x 00:15:19.418 [2024-04-23 21:16:13.679320] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:19.418 21:16:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:19.418 21:16:13 -- 
target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:15:19.418 21:16:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:19.418 21:16:13 -- common/autotest_common.sh@10 -- # set +x 00:15:19.679 21:16:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:19.679 21:16:13 -- target/rpc.sh@33 -- # stats='{ 00:15:19.679 "tick_rate": 1900000000, 00:15:19.679 "poll_groups": [ 00:15:19.679 { 00:15:19.679 "name": "nvmf_tgt_poll_group_0", 00:15:19.679 "admin_qpairs": 0, 00:15:19.679 "io_qpairs": 0, 00:15:19.679 "current_admin_qpairs": 0, 00:15:19.679 "current_io_qpairs": 0, 00:15:19.679 "pending_bdev_io": 0, 00:15:19.679 "completed_nvme_io": 0, 00:15:19.679 "transports": [ 00:15:19.679 { 00:15:19.679 "trtype": "TCP" 00:15:19.679 } 00:15:19.679 ] 00:15:19.679 }, 00:15:19.679 { 00:15:19.679 "name": "nvmf_tgt_poll_group_1", 00:15:19.679 "admin_qpairs": 0, 00:15:19.679 "io_qpairs": 0, 00:15:19.679 "current_admin_qpairs": 0, 00:15:19.679 "current_io_qpairs": 0, 00:15:19.679 "pending_bdev_io": 0, 00:15:19.679 "completed_nvme_io": 0, 00:15:19.679 "transports": [ 00:15:19.679 { 00:15:19.679 "trtype": "TCP" 00:15:19.679 } 00:15:19.679 ] 00:15:19.679 }, 00:15:19.679 { 00:15:19.679 "name": "nvmf_tgt_poll_group_2", 00:15:19.679 "admin_qpairs": 0, 00:15:19.679 "io_qpairs": 0, 00:15:19.679 "current_admin_qpairs": 0, 00:15:19.679 "current_io_qpairs": 0, 00:15:19.679 "pending_bdev_io": 0, 00:15:19.679 "completed_nvme_io": 0, 00:15:19.679 "transports": [ 00:15:19.679 { 00:15:19.679 "trtype": "TCP" 00:15:19.679 } 00:15:19.679 ] 00:15:19.679 }, 00:15:19.679 { 00:15:19.679 "name": "nvmf_tgt_poll_group_3", 00:15:19.679 "admin_qpairs": 0, 00:15:19.679 "io_qpairs": 0, 00:15:19.679 "current_admin_qpairs": 0, 00:15:19.679 "current_io_qpairs": 0, 00:15:19.679 "pending_bdev_io": 0, 00:15:19.679 "completed_nvme_io": 0, 00:15:19.679 "transports": [ 00:15:19.679 { 00:15:19.679 "trtype": "TCP" 00:15:19.679 } 00:15:19.679 ] 00:15:19.679 } 00:15:19.679 ] 00:15:19.679 }' 00:15:19.679 21:16:13 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:15:19.679 21:16:13 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:15:19.679 21:16:13 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:15:19.679 21:16:13 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:19.679 21:16:13 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:15:19.679 21:16:13 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:15:19.679 21:16:13 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:15:19.679 21:16:13 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:15:19.679 21:16:13 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:19.679 21:16:13 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:15:19.679 21:16:13 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:15:19.679 21:16:13 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:15:19.679 21:16:13 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:15:19.679 21:16:13 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:19.679 21:16:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:19.679 21:16:13 -- common/autotest_common.sh@10 -- # set +x 00:15:19.679 Malloc1 00:15:19.679 21:16:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:19.679 21:16:13 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:19.679 21:16:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:19.679 21:16:13 -- common/autotest_common.sh@10 -- # set +x 00:15:19.679 
21:16:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:19.679 21:16:13 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:19.679 21:16:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:19.679 21:16:13 -- common/autotest_common.sh@10 -- # set +x 00:15:19.679 21:16:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:19.679 21:16:13 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:15:19.679 21:16:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:19.679 21:16:13 -- common/autotest_common.sh@10 -- # set +x 00:15:19.679 21:16:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:19.679 21:16:13 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:19.679 21:16:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:19.679 21:16:13 -- common/autotest_common.sh@10 -- # set +x 00:15:19.679 [2024-04-23 21:16:13.850509] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:19.679 21:16:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:19.679 21:16:13 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -a 10.0.0.2 -s 4420 00:15:19.679 21:16:13 -- common/autotest_common.sh@638 -- # local es=0 00:15:19.679 21:16:13 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -a 10.0.0.2 -s 4420 00:15:19.679 21:16:13 -- common/autotest_common.sh@626 -- # local arg=nvme 00:15:19.679 21:16:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:19.679 21:16:13 -- common/autotest_common.sh@630 -- # type -t nvme 00:15:19.679 21:16:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:19.679 21:16:13 -- common/autotest_common.sh@632 -- # type -P nvme 00:15:19.679 21:16:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:19.679 21:16:13 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:15:19.679 21:16:13 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:15:19.679 21:16:13 -- common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -a 10.0.0.2 -s 4420 00:15:19.679 [2024-04-23 21:16:13.879495] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3' 00:15:19.679 Failed to write to /dev/nvme-fabrics: Input/output error 00:15:19.679 could not add new controller: failed to write to nvme-fabrics device 00:15:19.679 21:16:13 -- common/autotest_common.sh@641 -- # es=1 00:15:19.679 21:16:13 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:19.679 21:16:13 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:19.679 21:16:13 -- common/autotest_common.sh@665 -- # 
(( !es == 0 )) 00:15:19.679 21:16:13 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:15:19.679 21:16:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:19.679 21:16:13 -- common/autotest_common.sh@10 -- # set +x 00:15:19.679 21:16:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:19.679 21:16:13 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:21.590 21:16:15 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:15:21.590 21:16:15 -- common/autotest_common.sh@1184 -- # local i=0 00:15:21.590 21:16:15 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:15:21.590 21:16:15 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:15:21.590 21:16:15 -- common/autotest_common.sh@1191 -- # sleep 2 00:15:23.498 21:16:17 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:15:23.498 21:16:17 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:15:23.498 21:16:17 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:15:23.498 21:16:17 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:15:23.498 21:16:17 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:15:23.498 21:16:17 -- common/autotest_common.sh@1194 -- # return 0 00:15:23.498 21:16:17 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:23.498 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:23.499 21:16:17 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:23.499 21:16:17 -- common/autotest_common.sh@1205 -- # local i=0 00:15:23.499 21:16:17 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:15:23.499 21:16:17 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:23.499 21:16:17 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:15:23.499 21:16:17 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:23.499 21:16:17 -- common/autotest_common.sh@1217 -- # return 0 00:15:23.499 21:16:17 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:15:23.499 21:16:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:23.499 21:16:17 -- common/autotest_common.sh@10 -- # set +x 00:15:23.499 21:16:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:23.499 21:16:17 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:23.499 21:16:17 -- common/autotest_common.sh@638 -- # local es=0 00:15:23.499 21:16:17 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:23.499 21:16:17 -- common/autotest_common.sh@626 -- # local arg=nvme 00:15:23.499 21:16:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:23.499 21:16:17 -- common/autotest_common.sh@630 -- # type -t nvme 00:15:23.499 21:16:17 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:23.499 21:16:17 -- common/autotest_common.sh@632 -- # type -P nvme 00:15:23.499 21:16:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:23.499 21:16:17 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:15:23.499 21:16:17 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:15:23.499 21:16:17 -- common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:23.499 [2024-04-23 21:16:17.608660] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3' 00:15:23.499 Failed to write to /dev/nvme-fabrics: Input/output error 00:15:23.499 could not add new controller: failed to write to nvme-fabrics device 00:15:23.499 21:16:17 -- common/autotest_common.sh@641 -- # es=1 00:15:23.499 21:16:17 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:23.499 21:16:17 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:23.499 21:16:17 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:23.499 21:16:17 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:15:23.499 21:16:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:23.499 21:16:17 -- common/autotest_common.sh@10 -- # set +x 00:15:23.499 21:16:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:23.499 21:16:17 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:24.880 21:16:19 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:15:24.880 21:16:19 -- common/autotest_common.sh@1184 -- # local i=0 00:15:24.880 21:16:19 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:15:24.880 21:16:19 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:15:24.880 21:16:19 -- common/autotest_common.sh@1191 -- # sleep 2 00:15:26.791 21:16:21 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:15:26.791 21:16:21 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:15:26.791 21:16:21 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:15:26.791 21:16:21 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:15:26.791 21:16:21 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:15:26.791 21:16:21 -- common/autotest_common.sh@1194 -- # return 0 00:15:26.791 21:16:21 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:27.051 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:27.051 21:16:21 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:27.051 21:16:21 -- common/autotest_common.sh@1205 -- # local i=0 00:15:27.051 21:16:21 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:15:27.051 21:16:21 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:27.051 21:16:21 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:15:27.051 21:16:21 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:27.051 21:16:21 -- common/autotest_common.sh@1217 -- # return 0 00:15:27.051 21:16:21 -- 
target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:27.051 21:16:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:27.051 21:16:21 -- common/autotest_common.sh@10 -- # set +x 00:15:27.051 21:16:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:27.051 21:16:21 -- target/rpc.sh@81 -- # seq 1 5 00:15:27.051 21:16:21 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:27.051 21:16:21 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:27.051 21:16:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:27.051 21:16:21 -- common/autotest_common.sh@10 -- # set +x 00:15:27.051 21:16:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:27.051 21:16:21 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:27.051 21:16:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:27.051 21:16:21 -- common/autotest_common.sh@10 -- # set +x 00:15:27.051 [2024-04-23 21:16:21.313676] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:27.051 21:16:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:27.051 21:16:21 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:27.051 21:16:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:27.051 21:16:21 -- common/autotest_common.sh@10 -- # set +x 00:15:27.309 21:16:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:27.309 21:16:21 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:27.309 21:16:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:27.309 21:16:21 -- common/autotest_common.sh@10 -- # set +x 00:15:27.309 21:16:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:27.309 21:16:21 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:28.692 21:16:22 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:28.692 21:16:22 -- common/autotest_common.sh@1184 -- # local i=0 00:15:28.692 21:16:22 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:15:28.692 21:16:22 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:15:28.692 21:16:22 -- common/autotest_common.sh@1191 -- # sleep 2 00:15:30.709 21:16:24 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:15:30.709 21:16:24 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:15:30.709 21:16:24 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:15:30.709 21:16:24 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:15:30.709 21:16:24 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:15:30.709 21:16:24 -- common/autotest_common.sh@1194 -- # return 0 00:15:30.709 21:16:24 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:30.967 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:30.967 21:16:24 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:30.967 21:16:24 -- common/autotest_common.sh@1205 -- # local i=0 00:15:30.967 21:16:24 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:15:30.967 21:16:24 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 
00:15:30.967 21:16:25 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:15:30.967 21:16:25 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:30.967 21:16:25 -- common/autotest_common.sh@1217 -- # return 0 00:15:30.967 21:16:25 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:30.967 21:16:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:30.967 21:16:25 -- common/autotest_common.sh@10 -- # set +x 00:15:30.967 21:16:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:30.967 21:16:25 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:30.967 21:16:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:30.967 21:16:25 -- common/autotest_common.sh@10 -- # set +x 00:15:30.967 21:16:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:30.967 21:16:25 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:30.967 21:16:25 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:30.967 21:16:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:30.967 21:16:25 -- common/autotest_common.sh@10 -- # set +x 00:15:30.967 21:16:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:30.967 21:16:25 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:30.967 21:16:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:30.967 21:16:25 -- common/autotest_common.sh@10 -- # set +x 00:15:30.967 [2024-04-23 21:16:25.039997] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:30.967 21:16:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:30.967 21:16:25 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:30.967 21:16:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:30.967 21:16:25 -- common/autotest_common.sh@10 -- # set +x 00:15:30.967 21:16:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:30.967 21:16:25 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:30.967 21:16:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:30.967 21:16:25 -- common/autotest_common.sh@10 -- # set +x 00:15:30.967 21:16:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:30.967 21:16:25 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:32.350 21:16:26 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:32.350 21:16:26 -- common/autotest_common.sh@1184 -- # local i=0 00:15:32.350 21:16:26 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:15:32.350 21:16:26 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:15:32.350 21:16:26 -- common/autotest_common.sh@1191 -- # sleep 2 00:15:34.891 21:16:28 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:15:34.891 21:16:28 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:15:34.891 21:16:28 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:15:34.891 21:16:28 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:15:34.891 21:16:28 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:15:34.891 21:16:28 -- 
common/autotest_common.sh@1194 -- # return 0 00:15:34.891 21:16:28 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:34.891 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:34.891 21:16:28 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:34.891 21:16:28 -- common/autotest_common.sh@1205 -- # local i=0 00:15:34.891 21:16:28 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:15:34.891 21:16:28 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:34.891 21:16:28 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:15:34.891 21:16:28 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:34.891 21:16:28 -- common/autotest_common.sh@1217 -- # return 0 00:15:34.891 21:16:28 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:34.891 21:16:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:34.891 21:16:28 -- common/autotest_common.sh@10 -- # set +x 00:15:34.891 21:16:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:34.891 21:16:28 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:34.891 21:16:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:34.891 21:16:28 -- common/autotest_common.sh@10 -- # set +x 00:15:34.891 21:16:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:34.891 21:16:28 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:34.891 21:16:28 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:34.891 21:16:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:34.891 21:16:28 -- common/autotest_common.sh@10 -- # set +x 00:15:34.891 21:16:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:34.891 21:16:28 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:34.891 21:16:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:34.891 21:16:28 -- common/autotest_common.sh@10 -- # set +x 00:15:34.891 [2024-04-23 21:16:28.751099] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:34.891 21:16:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:34.891 21:16:28 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:34.892 21:16:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:34.892 21:16:28 -- common/autotest_common.sh@10 -- # set +x 00:15:34.892 21:16:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:34.892 21:16:28 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:34.892 21:16:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:34.892 21:16:28 -- common/autotest_common.sh@10 -- # set +x 00:15:34.892 21:16:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:34.892 21:16:28 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:36.273 21:16:30 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:36.273 21:16:30 -- common/autotest_common.sh@1184 -- # local i=0 00:15:36.273 21:16:30 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:15:36.273 21:16:30 -- common/autotest_common.sh@1186 -- 
# [[ -n '' ]] 00:15:36.273 21:16:30 -- common/autotest_common.sh@1191 -- # sleep 2 00:15:38.181 21:16:32 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:15:38.181 21:16:32 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:15:38.181 21:16:32 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:15:38.181 21:16:32 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:15:38.181 21:16:32 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:15:38.181 21:16:32 -- common/autotest_common.sh@1194 -- # return 0 00:15:38.181 21:16:32 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:38.181 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:38.181 21:16:32 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:38.181 21:16:32 -- common/autotest_common.sh@1205 -- # local i=0 00:15:38.181 21:16:32 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:15:38.181 21:16:32 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:38.181 21:16:32 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:15:38.181 21:16:32 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:38.181 21:16:32 -- common/autotest_common.sh@1217 -- # return 0 00:15:38.181 21:16:32 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:38.181 21:16:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:38.181 21:16:32 -- common/autotest_common.sh@10 -- # set +x 00:15:38.181 21:16:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:38.181 21:16:32 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:38.181 21:16:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:38.181 21:16:32 -- common/autotest_common.sh@10 -- # set +x 00:15:38.181 21:16:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:38.181 21:16:32 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:38.181 21:16:32 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:38.181 21:16:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:38.181 21:16:32 -- common/autotest_common.sh@10 -- # set +x 00:15:38.181 21:16:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:38.181 21:16:32 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:38.181 21:16:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:38.181 21:16:32 -- common/autotest_common.sh@10 -- # set +x 00:15:38.181 [2024-04-23 21:16:32.439680] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:38.181 21:16:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:38.181 21:16:32 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:38.181 21:16:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:38.181 21:16:32 -- common/autotest_common.sh@10 -- # set +x 00:15:38.181 21:16:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:38.181 21:16:32 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:38.181 21:16:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:38.181 21:16:32 -- common/autotest_common.sh@10 -- # set +x 00:15:38.442 21:16:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:38.442 
21:16:32 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:39.830 21:16:33 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:39.830 21:16:33 -- common/autotest_common.sh@1184 -- # local i=0 00:15:39.830 21:16:33 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:15:39.830 21:16:33 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:15:39.830 21:16:33 -- common/autotest_common.sh@1191 -- # sleep 2 00:15:41.740 21:16:35 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:15:41.740 21:16:35 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:15:41.740 21:16:35 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:15:41.740 21:16:35 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:15:41.740 21:16:35 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:15:41.740 21:16:35 -- common/autotest_common.sh@1194 -- # return 0 00:15:41.740 21:16:35 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:42.000 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:42.000 21:16:36 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:42.000 21:16:36 -- common/autotest_common.sh@1205 -- # local i=0 00:15:42.000 21:16:36 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:15:42.000 21:16:36 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:42.000 21:16:36 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:15:42.000 21:16:36 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:42.000 21:16:36 -- common/autotest_common.sh@1217 -- # return 0 00:15:42.000 21:16:36 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:42.000 21:16:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:42.000 21:16:36 -- common/autotest_common.sh@10 -- # set +x 00:15:42.000 21:16:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:42.000 21:16:36 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:42.000 21:16:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:42.000 21:16:36 -- common/autotest_common.sh@10 -- # set +x 00:15:42.000 21:16:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:42.000 21:16:36 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:42.000 21:16:36 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:42.000 21:16:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:42.000 21:16:36 -- common/autotest_common.sh@10 -- # set +x 00:15:42.000 21:16:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:42.000 21:16:36 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:42.000 21:16:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:42.000 21:16:36 -- common/autotest_common.sh@10 -- # set +x 00:15:42.000 [2024-04-23 21:16:36.182017] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:42.000 21:16:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:42.000 21:16:36 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:42.000 
21:16:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:42.000 21:16:36 -- common/autotest_common.sh@10 -- # set +x 00:15:42.000 21:16:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:42.000 21:16:36 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:42.000 21:16:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:42.000 21:16:36 -- common/autotest_common.sh@10 -- # set +x 00:15:42.000 21:16:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:42.000 21:16:36 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:43.379 21:16:37 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:43.379 21:16:37 -- common/autotest_common.sh@1184 -- # local i=0 00:15:43.379 21:16:37 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:15:43.379 21:16:37 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:15:43.379 21:16:37 -- common/autotest_common.sh@1191 -- # sleep 2 00:15:45.922 21:16:39 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:15:45.922 21:16:39 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:15:45.922 21:16:39 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:15:45.922 21:16:39 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:15:45.922 21:16:39 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:15:45.922 21:16:39 -- common/autotest_common.sh@1194 -- # return 0 00:15:45.922 21:16:39 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:45.922 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:45.922 21:16:39 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:45.922 21:16:39 -- common/autotest_common.sh@1205 -- # local i=0 00:15:45.922 21:16:39 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:15:45.922 21:16:39 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:45.922 21:16:39 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:45.922 21:16:39 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:15:45.922 21:16:39 -- common/autotest_common.sh@1217 -- # return 0 00:15:45.922 21:16:39 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:45.922 21:16:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:45.922 21:16:39 -- common/autotest_common.sh@10 -- # set +x 00:15:45.922 21:16:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:45.922 21:16:39 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:45.922 21:16:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:45.922 21:16:39 -- common/autotest_common.sh@10 -- # set +x 00:15:45.922 21:16:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:45.922 21:16:39 -- target/rpc.sh@99 -- # seq 1 5 00:15:45.922 21:16:39 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:45.922 21:16:39 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:45.922 21:16:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:45.922 21:16:39 -- common/autotest_common.sh@10 -- # set +x 00:15:45.922 21:16:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:45.922 21:16:39 
-- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:45.922 21:16:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:45.922 21:16:39 -- common/autotest_common.sh@10 -- # set +x 00:15:45.922 [2024-04-23 21:16:39.884806] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:45.922 21:16:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:45.922 21:16:39 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:45.922 21:16:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:45.922 21:16:39 -- common/autotest_common.sh@10 -- # set +x 00:15:45.922 21:16:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:45.922 21:16:39 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:45.922 21:16:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:45.922 21:16:39 -- common/autotest_common.sh@10 -- # set +x 00:15:45.922 21:16:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:45.922 21:16:39 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:45.922 21:16:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:45.922 21:16:39 -- common/autotest_common.sh@10 -- # set +x 00:15:45.922 21:16:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:45.922 21:16:39 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:45.922 21:16:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:45.922 21:16:39 -- common/autotest_common.sh@10 -- # set +x 00:15:45.922 21:16:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:45.922 21:16:39 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:45.922 21:16:39 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:45.922 21:16:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:45.922 21:16:39 -- common/autotest_common.sh@10 -- # set +x 00:15:45.922 21:16:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:45.922 21:16:39 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:45.922 21:16:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:45.922 21:16:39 -- common/autotest_common.sh@10 -- # set +x 00:15:45.922 [2024-04-23 21:16:39.932759] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:45.922 21:16:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:45.922 21:16:39 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:45.922 21:16:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:45.922 21:16:39 -- common/autotest_common.sh@10 -- # set +x 00:15:45.922 21:16:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:45.922 21:16:39 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:45.922 21:16:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:45.922 21:16:39 -- common/autotest_common.sh@10 -- # set +x 00:15:45.922 21:16:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:45.922 21:16:39 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:45.922 21:16:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:45.922 21:16:39 -- 
common/autotest_common.sh@10 -- # set +x 00:15:45.922 21:16:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:45.922 21:16:39 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:45.922 21:16:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:45.922 21:16:39 -- common/autotest_common.sh@10 -- # set +x 00:15:45.922 21:16:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:45.922 21:16:39 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:45.922 21:16:39 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:45.922 21:16:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:45.922 21:16:39 -- common/autotest_common.sh@10 -- # set +x 00:15:45.922 21:16:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:45.922 21:16:39 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:45.922 21:16:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:45.922 21:16:39 -- common/autotest_common.sh@10 -- # set +x 00:15:45.922 [2024-04-23 21:16:39.980821] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:45.922 21:16:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:45.922 21:16:39 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:45.922 21:16:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:45.922 21:16:39 -- common/autotest_common.sh@10 -- # set +x 00:15:45.922 21:16:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:45.922 21:16:39 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:45.922 21:16:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:45.922 21:16:39 -- common/autotest_common.sh@10 -- # set +x 00:15:45.922 21:16:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:45.922 21:16:40 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:45.922 21:16:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:45.922 21:16:40 -- common/autotest_common.sh@10 -- # set +x 00:15:45.922 21:16:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:45.922 21:16:40 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:45.922 21:16:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:45.922 21:16:40 -- common/autotest_common.sh@10 -- # set +x 00:15:45.922 21:16:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:45.922 21:16:40 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:45.922 21:16:40 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:45.922 21:16:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:45.922 21:16:40 -- common/autotest_common.sh@10 -- # set +x 00:15:45.922 21:16:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:45.922 21:16:40 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:45.922 21:16:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:45.922 21:16:40 -- common/autotest_common.sh@10 -- # set +x 00:15:45.922 [2024-04-23 21:16:40.028892] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:45.922 21:16:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:45.922 
21:16:40 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:45.922 21:16:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:45.922 21:16:40 -- common/autotest_common.sh@10 -- # set +x 00:15:45.922 21:16:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:45.922 21:16:40 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:45.922 21:16:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:45.922 21:16:40 -- common/autotest_common.sh@10 -- # set +x 00:15:45.922 21:16:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:45.922 21:16:40 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:45.922 21:16:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:45.922 21:16:40 -- common/autotest_common.sh@10 -- # set +x 00:15:45.922 21:16:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:45.923 21:16:40 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:45.923 21:16:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:45.923 21:16:40 -- common/autotest_common.sh@10 -- # set +x 00:15:45.923 21:16:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:45.923 21:16:40 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:45.923 21:16:40 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:45.923 21:16:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:45.923 21:16:40 -- common/autotest_common.sh@10 -- # set +x 00:15:45.923 21:16:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:45.923 21:16:40 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:45.923 21:16:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:45.923 21:16:40 -- common/autotest_common.sh@10 -- # set +x 00:15:45.923 [2024-04-23 21:16:40.076949] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:45.923 21:16:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:45.923 21:16:40 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:45.923 21:16:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:45.923 21:16:40 -- common/autotest_common.sh@10 -- # set +x 00:15:45.923 21:16:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:45.923 21:16:40 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:45.923 21:16:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:45.923 21:16:40 -- common/autotest_common.sh@10 -- # set +x 00:15:45.923 21:16:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:45.923 21:16:40 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:45.923 21:16:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:45.923 21:16:40 -- common/autotest_common.sh@10 -- # set +x 00:15:45.923 21:16:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:45.923 21:16:40 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:45.923 21:16:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:45.923 21:16:40 -- common/autotest_common.sh@10 -- # set +x 00:15:45.923 21:16:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:45.923 21:16:40 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 
00:15:45.923 21:16:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:45.923 21:16:40 -- common/autotest_common.sh@10 -- # set +x 00:15:45.923 21:16:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:45.923 21:16:40 -- target/rpc.sh@110 -- # stats='{ 00:15:45.923 "tick_rate": 1900000000, 00:15:45.923 "poll_groups": [ 00:15:45.923 { 00:15:45.923 "name": "nvmf_tgt_poll_group_0", 00:15:45.923 "admin_qpairs": 0, 00:15:45.923 "io_qpairs": 224, 00:15:45.923 "current_admin_qpairs": 0, 00:15:45.923 "current_io_qpairs": 0, 00:15:45.923 "pending_bdev_io": 0, 00:15:45.923 "completed_nvme_io": 227, 00:15:45.923 "transports": [ 00:15:45.923 { 00:15:45.923 "trtype": "TCP" 00:15:45.923 } 00:15:45.923 ] 00:15:45.923 }, 00:15:45.923 { 00:15:45.923 "name": "nvmf_tgt_poll_group_1", 00:15:45.923 "admin_qpairs": 1, 00:15:45.923 "io_qpairs": 223, 00:15:45.923 "current_admin_qpairs": 0, 00:15:45.923 "current_io_qpairs": 0, 00:15:45.923 "pending_bdev_io": 0, 00:15:45.923 "completed_nvme_io": 310, 00:15:45.923 "transports": [ 00:15:45.923 { 00:15:45.923 "trtype": "TCP" 00:15:45.923 } 00:15:45.923 ] 00:15:45.923 }, 00:15:45.923 { 00:15:45.923 "name": "nvmf_tgt_poll_group_2", 00:15:45.923 "admin_qpairs": 6, 00:15:45.923 "io_qpairs": 218, 00:15:45.923 "current_admin_qpairs": 0, 00:15:45.923 "current_io_qpairs": 0, 00:15:45.923 "pending_bdev_io": 0, 00:15:45.923 "completed_nvme_io": 270, 00:15:45.923 "transports": [ 00:15:45.923 { 00:15:45.923 "trtype": "TCP" 00:15:45.923 } 00:15:45.923 ] 00:15:45.923 }, 00:15:45.923 { 00:15:45.923 "name": "nvmf_tgt_poll_group_3", 00:15:45.923 "admin_qpairs": 0, 00:15:45.923 "io_qpairs": 224, 00:15:45.923 "current_admin_qpairs": 0, 00:15:45.923 "current_io_qpairs": 0, 00:15:45.923 "pending_bdev_io": 0, 00:15:45.923 "completed_nvme_io": 432, 00:15:45.923 "transports": [ 00:15:45.923 { 00:15:45.923 "trtype": "TCP" 00:15:45.923 } 00:15:45.923 ] 00:15:45.923 } 00:15:45.923 ] 00:15:45.923 }' 00:15:45.923 21:16:40 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:15:45.923 21:16:40 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:15:45.923 21:16:40 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:15:45.923 21:16:40 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:45.923 21:16:40 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:15:45.923 21:16:40 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:15:45.923 21:16:40 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:15:45.923 21:16:40 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:45.923 21:16:40 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:15:46.183 21:16:40 -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:15:46.183 21:16:40 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:15:46.183 21:16:40 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:15:46.183 21:16:40 -- target/rpc.sh@123 -- # nvmftestfini 00:15:46.183 21:16:40 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:46.183 21:16:40 -- nvmf/common.sh@117 -- # sync 00:15:46.183 21:16:40 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:46.183 21:16:40 -- nvmf/common.sh@120 -- # set +e 00:15:46.183 21:16:40 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:46.183 21:16:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:46.183 rmmod nvme_tcp 00:15:46.183 rmmod nvme_fabrics 00:15:46.183 rmmod nvme_keyring 00:15:46.183 21:16:40 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:46.183 21:16:40 -- nvmf/common.sh@124 -- # set -e 00:15:46.183 21:16:40 -- 
nvmf/common.sh@125 -- # return 0 00:15:46.183 21:16:40 -- nvmf/common.sh@478 -- # '[' -n 1369018 ']' 00:15:46.183 21:16:40 -- nvmf/common.sh@479 -- # killprocess 1369018 00:15:46.183 21:16:40 -- common/autotest_common.sh@936 -- # '[' -z 1369018 ']' 00:15:46.183 21:16:40 -- common/autotest_common.sh@940 -- # kill -0 1369018 00:15:46.183 21:16:40 -- common/autotest_common.sh@941 -- # uname 00:15:46.183 21:16:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:46.183 21:16:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1369018 00:15:46.183 21:16:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:46.183 21:16:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:46.183 21:16:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1369018' 00:15:46.183 killing process with pid 1369018 00:15:46.183 21:16:40 -- common/autotest_common.sh@955 -- # kill 1369018 00:15:46.183 21:16:40 -- common/autotest_common.sh@960 -- # wait 1369018 00:15:46.751 21:16:40 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:46.751 21:16:40 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:46.751 21:16:40 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:46.751 21:16:40 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:46.751 21:16:40 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:46.751 21:16:40 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:46.751 21:16:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:46.751 21:16:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:48.666 21:16:42 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:48.666 00:15:48.666 real 0m35.803s 00:15:48.666 user 1m51.685s 00:15:48.666 sys 0m5.484s 00:15:48.666 21:16:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:48.666 21:16:42 -- common/autotest_common.sh@10 -- # set +x 00:15:48.666 ************************************ 00:15:48.666 END TEST nvmf_rpc 00:15:48.666 ************************************ 00:15:48.666 21:16:42 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:15:48.666 21:16:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:48.666 21:16:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:48.666 21:16:42 -- common/autotest_common.sh@10 -- # set +x 00:15:48.926 ************************************ 00:15:48.926 START TEST nvmf_invalid 00:15:48.926 ************************************ 00:15:48.926 21:16:43 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:15:48.926 * Looking for test storage... 
00:15:48.926 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:15:48.926 21:16:43 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:15:48.926 21:16:43 -- nvmf/common.sh@7 -- # uname -s 00:15:48.926 21:16:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:48.926 21:16:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:48.926 21:16:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:48.926 21:16:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:48.926 21:16:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:48.926 21:16:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:48.926 21:16:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:48.926 21:16:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:48.926 21:16:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:48.926 21:16:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:48.926 21:16:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:15:48.926 21:16:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:15:48.926 21:16:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:48.926 21:16:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:48.927 21:16:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:15:48.927 21:16:43 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:48.927 21:16:43 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:15:48.927 21:16:43 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:48.927 21:16:43 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:48.927 21:16:43 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:48.927 21:16:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.927 21:16:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.927 21:16:43 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.927 21:16:43 -- paths/export.sh@5 -- # export PATH 00:15:48.927 21:16:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.927 21:16:43 -- nvmf/common.sh@47 -- # : 0 00:15:48.927 21:16:43 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:48.927 21:16:43 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:48.927 21:16:43 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:48.927 21:16:43 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:48.927 21:16:43 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:48.927 21:16:43 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:48.927 21:16:43 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:48.927 21:16:43 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:48.927 21:16:43 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:15:48.927 21:16:43 -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:15:48.927 21:16:43 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:15:48.927 21:16:43 -- target/invalid.sh@14 -- # target=foobar 00:15:48.927 21:16:43 -- target/invalid.sh@16 -- # RANDOM=0 00:15:48.927 21:16:43 -- target/invalid.sh@34 -- # nvmftestinit 00:15:48.927 21:16:43 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:48.927 21:16:43 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:48.927 21:16:43 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:48.927 21:16:43 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:48.927 21:16:43 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:48.927 21:16:43 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:48.927 21:16:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:48.927 21:16:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:48.927 21:16:43 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:15:48.927 21:16:43 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:48.927 21:16:43 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:48.927 21:16:43 -- common/autotest_common.sh@10 -- # set +x 00:15:55.506 21:16:49 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:55.506 21:16:49 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:55.506 21:16:49 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:55.506 21:16:49 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:55.506 21:16:49 -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:55.506 21:16:49 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:55.506 21:16:49 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:55.506 21:16:49 -- nvmf/common.sh@295 -- # net_devs=() 00:15:55.506 21:16:49 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:55.506 21:16:49 -- nvmf/common.sh@296 -- # e810=() 00:15:55.506 21:16:49 -- nvmf/common.sh@296 -- # local -ga e810 00:15:55.506 21:16:49 -- nvmf/common.sh@297 -- # x722=() 00:15:55.506 21:16:49 -- nvmf/common.sh@297 -- # local -ga x722 00:15:55.506 21:16:49 -- nvmf/common.sh@298 -- # mlx=() 00:15:55.506 21:16:49 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:55.506 21:16:49 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:55.506 21:16:49 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:55.506 21:16:49 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:55.506 21:16:49 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:55.506 21:16:49 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:55.506 21:16:49 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:55.507 21:16:49 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:55.507 21:16:49 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:55.507 21:16:49 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:55.507 21:16:49 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:55.507 21:16:49 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:55.507 21:16:49 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:55.507 21:16:49 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:55.507 21:16:49 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:15:55.507 21:16:49 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:15:55.507 21:16:49 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:15:55.507 21:16:49 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:55.507 21:16:49 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:55.507 21:16:49 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:15:55.507 Found 0000:27:00.0 (0x8086 - 0x159b) 00:15:55.507 21:16:49 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:55.507 21:16:49 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:55.507 21:16:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:55.507 21:16:49 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:55.507 21:16:49 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:55.507 21:16:49 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:55.507 21:16:49 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:15:55.507 Found 0000:27:00.1 (0x8086 - 0x159b) 00:15:55.507 21:16:49 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:55.507 21:16:49 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:55.507 21:16:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:55.507 21:16:49 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:55.507 21:16:49 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:55.507 21:16:49 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:55.507 21:16:49 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:15:55.507 21:16:49 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:55.507 21:16:49 -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:55.507 21:16:49 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:55.507 21:16:49 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:55.507 21:16:49 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:15:55.507 Found net devices under 0000:27:00.0: cvl_0_0 00:15:55.507 21:16:49 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:55.507 21:16:49 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:55.507 21:16:49 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:55.507 21:16:49 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:55.507 21:16:49 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:55.507 21:16:49 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:15:55.507 Found net devices under 0000:27:00.1: cvl_0_1 00:15:55.507 21:16:49 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:55.507 21:16:49 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:55.507 21:16:49 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:55.507 21:16:49 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:55.507 21:16:49 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:15:55.507 21:16:49 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:15:55.507 21:16:49 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:55.507 21:16:49 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:55.507 21:16:49 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:55.507 21:16:49 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:55.507 21:16:49 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:55.507 21:16:49 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:55.507 21:16:49 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:55.507 21:16:49 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:55.507 21:16:49 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:55.507 21:16:49 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:55.507 21:16:49 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:55.507 21:16:49 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:55.507 21:16:49 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:55.507 21:16:49 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:55.507 21:16:49 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:55.507 21:16:49 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:55.507 21:16:49 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:55.507 21:16:49 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:55.507 21:16:49 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:55.507 21:16:49 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:55.507 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:55.507 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.285 ms 00:15:55.507 00:15:55.507 --- 10.0.0.2 ping statistics --- 00:15:55.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:55.507 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:15:55.507 21:16:49 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:55.507 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:55.507 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.332 ms 00:15:55.507 00:15:55.507 --- 10.0.0.1 ping statistics --- 00:15:55.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:55.507 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:15:55.507 21:16:49 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:55.507 21:16:49 -- nvmf/common.sh@411 -- # return 0 00:15:55.507 21:16:49 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:55.507 21:16:49 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:55.507 21:16:49 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:55.507 21:16:49 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:55.507 21:16:49 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:55.507 21:16:49 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:55.507 21:16:49 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:55.507 21:16:49 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:15:55.507 21:16:49 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:55.507 21:16:49 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:55.507 21:16:49 -- common/autotest_common.sh@10 -- # set +x 00:15:55.507 21:16:49 -- nvmf/common.sh@470 -- # nvmfpid=1378678 00:15:55.507 21:16:49 -- nvmf/common.sh@471 -- # waitforlisten 1378678 00:15:55.507 21:16:49 -- common/autotest_common.sh@817 -- # '[' -z 1378678 ']' 00:15:55.507 21:16:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:55.507 21:16:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:55.507 21:16:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:55.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:55.507 21:16:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:55.507 21:16:49 -- common/autotest_common.sh@10 -- # set +x 00:15:55.507 21:16:49 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:55.769 [2024-04-23 21:16:49.820072] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 00:15:55.769 [2024-04-23 21:16:49.820207] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:55.769 EAL: No free 2048 kB hugepages reported on node 1 00:15:55.769 [2024-04-23 21:16:49.963253] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:56.030 [2024-04-23 21:16:50.076477] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:56.030 [2024-04-23 21:16:50.076532] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:56.030 [2024-04-23 21:16:50.076546] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:56.030 [2024-04-23 21:16:50.076556] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:56.030 [2024-04-23 21:16:50.076565] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
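The nvmf_tcp_init sequence above turns the two ice ports into a point-to-point TCP test bed: cvl_0_0 is moved into a fresh cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), port 4420 is opened in iptables, and one ping in each direction proves the path before nvmf_tgt is launched inside the namespace. A minimal sketch of the same setup, assuming two spare ports wired back-to-back and hypothetical names eth_tgt/eth_ini in place of the cvl_0_* devices:

    TGT_IF=eth_tgt INI_IF=eth_ini NS=nvmf_tgt_ns     # hypothetical names

    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"                # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev "$INI_IF"            # initiator side, root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                               # root ns -> target ns
    ip netns exec "$NS" ping -c 1 10.0.0.1           # target ns -> root ns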
00:15:56.030 [2024-04-23 21:16:50.076656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:56.030 [2024-04-23 21:16:50.076681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:56.031 [2024-04-23 21:16:50.076784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:56.031 [2024-04-23 21:16:50.076794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:56.291 21:16:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:56.291 21:16:50 -- common/autotest_common.sh@850 -- # return 0 00:15:56.291 21:16:50 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:56.291 21:16:50 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:56.291 21:16:50 -- common/autotest_common.sh@10 -- # set +x 00:15:56.552 21:16:50 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:56.552 21:16:50 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:56.552 21:16:50 -- target/invalid.sh@40 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode6809 00:15:56.552 [2024-04-23 21:16:50.712163] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:15:56.552 21:16:50 -- target/invalid.sh@40 -- # out='request: 00:15:56.552 { 00:15:56.552 "nqn": "nqn.2016-06.io.spdk:cnode6809", 00:15:56.552 "tgt_name": "foobar", 00:15:56.552 "method": "nvmf_create_subsystem", 00:15:56.552 "req_id": 1 00:15:56.552 } 00:15:56.552 Got JSON-RPC error response 00:15:56.552 response: 00:15:56.552 { 00:15:56.552 "code": -32603, 00:15:56.552 "message": "Unable to find target foobar" 00:15:56.552 }' 00:15:56.552 21:16:50 -- target/invalid.sh@41 -- # [[ request: 00:15:56.552 { 00:15:56.552 "nqn": "nqn.2016-06.io.spdk:cnode6809", 00:15:56.552 "tgt_name": "foobar", 00:15:56.552 "method": "nvmf_create_subsystem", 00:15:56.552 "req_id": 1 00:15:56.552 } 00:15:56.552 Got JSON-RPC error response 00:15:56.552 response: 00:15:56.552 { 00:15:56.552 "code": -32603, 00:15:56.552 "message": "Unable to find target foobar" 00:15:56.552 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:15:56.552 21:16:50 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:15:56.552 21:16:50 -- target/invalid.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode20796 00:15:56.812 [2024-04-23 21:16:50.860404] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20796: invalid serial number 'SPDKISFASTANDAWESOME' 00:15:56.812 21:16:50 -- target/invalid.sh@45 -- # out='request: 00:15:56.812 { 00:15:56.812 "nqn": "nqn.2016-06.io.spdk:cnode20796", 00:15:56.812 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:15:56.812 "method": "nvmf_create_subsystem", 00:15:56.812 "req_id": 1 00:15:56.812 } 00:15:56.812 Got JSON-RPC error response 00:15:56.812 response: 00:15:56.812 { 00:15:56.812 "code": -32602, 00:15:56.812 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:15:56.812 }' 00:15:56.812 21:16:50 -- target/invalid.sh@46 -- # [[ request: 00:15:56.812 { 00:15:56.812 "nqn": "nqn.2016-06.io.spdk:cnode20796", 00:15:56.812 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:15:56.812 "method": "nvmf_create_subsystem", 00:15:56.812 "req_id": 1 00:15:56.812 } 00:15:56.812 Got JSON-RPC error response 00:15:56.812 response: 00:15:56.812 { 00:15:56.812 "code": 
-32602, 00:15:56.812 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:15:56.812 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:15:56.812 21:16:50 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:15:56.812 21:16:50 -- target/invalid.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode9485 00:15:56.812 [2024-04-23 21:16:51.016555] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9485: invalid model number 'SPDK_Controller' 00:15:56.812 21:16:51 -- target/invalid.sh@50 -- # out='request: 00:15:56.812 { 00:15:56.812 "nqn": "nqn.2016-06.io.spdk:cnode9485", 00:15:56.812 "model_number": "SPDK_Controller\u001f", 00:15:56.812 "method": "nvmf_create_subsystem", 00:15:56.812 "req_id": 1 00:15:56.812 } 00:15:56.812 Got JSON-RPC error response 00:15:56.812 response: 00:15:56.812 { 00:15:56.812 "code": -32602, 00:15:56.812 "message": "Invalid MN SPDK_Controller\u001f" 00:15:56.812 }' 00:15:56.812 21:16:51 -- target/invalid.sh@51 -- # [[ request: 00:15:56.812 { 00:15:56.812 "nqn": "nqn.2016-06.io.spdk:cnode9485", 00:15:56.812 "model_number": "SPDK_Controller\u001f", 00:15:56.812 "method": "nvmf_create_subsystem", 00:15:56.812 "req_id": 1 00:15:56.812 } 00:15:56.812 Got JSON-RPC error response 00:15:56.812 response: 00:15:56.812 { 00:15:56.812 "code": -32602, 00:15:56.812 "message": "Invalid MN SPDK_Controller\u001f" 00:15:56.812 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:15:56.812 21:16:51 -- target/invalid.sh@54 -- # gen_random_s 21 00:15:56.812 21:16:51 -- target/invalid.sh@19 -- # local length=21 ll 00:15:56.812 21:16:51 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:15:56.812 21:16:51 -- target/invalid.sh@21 -- # local chars 00:15:56.812 21:16:51 -- target/invalid.sh@22 -- # local string 00:15:56.812 21:16:51 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:15:56.812 21:16:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:56.812 21:16:51 -- target/invalid.sh@25 -- # printf %x 97 00:15:56.812 21:16:51 -- target/invalid.sh@25 -- # echo -e '\x61' 00:15:56.812 21:16:51 -- target/invalid.sh@25 -- # string+=a 00:15:56.812 21:16:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:56.812 21:16:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:56.812 21:16:51 -- target/invalid.sh@25 -- # printf %x 115 00:15:56.812 21:16:51 -- target/invalid.sh@25 -- # echo -e '\x73' 00:15:56.812 21:16:51 -- target/invalid.sh@25 -- # string+=s 00:15:56.812 21:16:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:56.812 21:16:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:56.812 21:16:51 -- target/invalid.sh@25 -- # printf %x 114 00:15:56.812 21:16:51 -- target/invalid.sh@25 -- # echo -e '\x72' 00:15:56.812 21:16:51 -- target/invalid.sh@25 -- # string+=r 00:15:56.812 21:16:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:56.812 21:16:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:56.812 21:16:51 -- target/invalid.sh@25 -- # printf %x 89 00:15:56.812 21:16:51 -- target/invalid.sh@25 -- # echo -e '\x59' 
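Every negative case in invalid.sh follows the same pattern: run the RPC expecting it to fail, capture the JSON-RPC error into out, then glob-match the message (the *\I\n\v\a\l\i\d\ \S\N* strings are only xtrace's escaping of patterns like *'Invalid SN'*). A minimal sketch of that pattern, assuming rpc.py is invoked from the repo root and with a hypothetical cnode1 standing in for the random cnode numbers above:

    # Serial number with a trailing 0x1f control byte, as in the test above.
    out=$(./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
          -s $'SPDKISFASTANDAWESOME\037' 2>&1) && exit 1   # the call must fail
    [[ $out == *'Invalid SN'* ]]                            # ...for the right reason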
00:15:56.812 21:16:51 -- target/invalid.sh@25 -- # string+=Y 00:15:56.812 21:16:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:56.812 21:16:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:56.812 21:16:51 -- target/invalid.sh@25 -- # printf %x 62 00:15:56.812 21:16:51 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:15:56.812 21:16:51 -- target/invalid.sh@25 -- # string+='>' 00:15:56.812 21:16:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:56.812 21:16:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:56.812 21:16:51 -- target/invalid.sh@25 -- # printf %x 52 00:15:57.074 21:16:51 -- target/invalid.sh@25 -- # echo -e '\x34' 00:15:57.074 21:16:51 -- target/invalid.sh@25 -- # string+=4 00:15:57.074 21:16:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:57.074 21:16:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:57.074 21:16:51 -- target/invalid.sh@25 -- # printf %x 69 00:15:57.074 21:16:51 -- target/invalid.sh@25 -- # echo -e '\x45' 00:15:57.074 21:16:51 -- target/invalid.sh@25 -- # string+=E 00:15:57.074 21:16:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:57.074 21:16:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:57.074 21:16:51 -- target/invalid.sh@25 -- # printf %x 93 00:15:57.074 21:16:51 -- target/invalid.sh@25 -- # echo -e '\x5d' 00:15:57.074 21:16:51 -- target/invalid.sh@25 -- # string+=']' 00:15:57.074 21:16:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:57.074 21:16:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:57.074 21:16:51 -- target/invalid.sh@25 -- # printf %x 34 00:15:57.074 21:16:51 -- target/invalid.sh@25 -- # echo -e '\x22' 00:15:57.074 21:16:51 -- target/invalid.sh@25 -- # string+='"' 00:15:57.074 21:16:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:57.074 21:16:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:57.074 21:16:51 -- target/invalid.sh@25 -- # printf %x 110 00:15:57.074 21:16:51 -- target/invalid.sh@25 -- # echo -e '\x6e' 00:15:57.074 21:16:51 -- target/invalid.sh@25 -- # string+=n 00:15:57.074 21:16:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:57.074 21:16:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:57.074 21:16:51 -- target/invalid.sh@25 -- # printf %x 48 00:15:57.074 21:16:51 -- target/invalid.sh@25 -- # echo -e '\x30' 00:15:57.074 21:16:51 -- target/invalid.sh@25 -- # string+=0 00:15:57.074 21:16:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:57.074 21:16:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:57.074 21:16:51 -- target/invalid.sh@25 -- # printf %x 48 00:15:57.074 21:16:51 -- target/invalid.sh@25 -- # echo -e '\x30' 00:15:57.075 21:16:51 -- target/invalid.sh@25 -- # string+=0 00:15:57.075 21:16:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:57.075 21:16:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:57.075 21:16:51 -- target/invalid.sh@25 -- # printf %x 100 00:15:57.075 21:16:51 -- target/invalid.sh@25 -- # echo -e '\x64' 00:15:57.075 21:16:51 -- target/invalid.sh@25 -- # string+=d 00:15:57.075 21:16:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:57.075 21:16:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:57.075 21:16:51 -- target/invalid.sh@25 -- # printf %x 48 00:15:57.075 21:16:51 -- target/invalid.sh@25 -- # echo -e '\x30' 00:15:57.075 21:16:51 -- target/invalid.sh@25 -- # string+=0 00:15:57.075 21:16:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:57.075 21:16:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:57.075 21:16:51 -- target/invalid.sh@25 -- # printf %x 47 00:15:57.075 21:16:51 -- target/invalid.sh@25 -- # echo -e '\x2f' 
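The run of printf %x / echo -e / string+= records here is gen_random_s 21 assembling a random serial number one byte at a time from ASCII codes 32-127; 21 characters is one past the 20-byte serial-number field NVMe defines (the gen_random_s 41 pass further down similarly overruns the 40-byte model-number field), which is why the create call that follows is rejected with Invalid SN. A simplified re-implementation of the generator; the character range matches the chars table in the trace, but the details of invalid.sh may differ:

    gen_random_s() {
        # One random character per iteration, codes 32..127 (96 values),
        # via the same printf %x -> echo -e -> string+= steps traced here.
        local length=$1 ll string=
        for (( ll = 0; ll < length; ll++ )); do
            string+=$(echo -e "\x$(printf %x $(( RANDOM % 96 + 32 )))")
        done
        echo "$string"
    }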
00:15:57.075 21:16:51 -- target/invalid.sh@25 -- # string+=/ 00:15:57.075 21:16:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:57.075 21:16:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:57.075 21:16:51 -- target/invalid.sh@25 -- # printf %x 65 00:15:57.075 21:16:51 -- target/invalid.sh@25 -- # echo -e '\x41' 00:15:57.075 21:16:51 -- target/invalid.sh@25 -- # string+=A 00:15:57.075 21:16:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:57.075 21:16:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:57.075 21:16:51 -- target/invalid.sh@25 -- # printf %x 83 00:15:57.075 21:16:51 -- target/invalid.sh@25 -- # echo -e '\x53' 00:15:57.075 21:16:51 -- target/invalid.sh@25 -- # string+=S 00:15:57.075 21:16:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:57.075 21:16:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:57.075 21:16:51 -- target/invalid.sh@25 -- # printf %x 60 00:15:57.075 21:16:51 -- target/invalid.sh@25 -- # echo -e '\x3c' 00:15:57.075 21:16:51 -- target/invalid.sh@25 -- # string+='<' 00:15:57.075 21:16:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:57.075 21:16:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:57.075 21:16:51 -- target/invalid.sh@25 -- # printf %x 92 00:15:57.075 21:16:51 -- target/invalid.sh@25 -- # echo -e '\x5c' 00:15:57.075 21:16:51 -- target/invalid.sh@25 -- # string+='\' 00:15:57.075 21:16:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:57.075 21:16:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:57.075 21:16:51 -- target/invalid.sh@25 -- # printf %x 65 00:15:57.075 21:16:51 -- target/invalid.sh@25 -- # echo -e '\x41' 00:15:57.075 21:16:51 -- target/invalid.sh@25 -- # string+=A 00:15:57.075 21:16:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:57.075 21:16:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:57.075 21:16:51 -- target/invalid.sh@25 -- # printf %x 59 00:15:57.075 21:16:51 -- target/invalid.sh@25 -- # echo -e '\x3b' 00:15:57.075 21:16:51 -- target/invalid.sh@25 -- # string+=';' 00:15:57.075 21:16:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:57.075 21:16:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:57.075 21:16:51 -- target/invalid.sh@28 -- # [[ a == \- ]] 00:15:57.075 21:16:51 -- target/invalid.sh@31 -- # echo 'asrY>4E]"n00d0/AS<\A;' 00:15:57.075 21:16:51 -- target/invalid.sh@54 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'asrY>4E]"n00d0/AS<\A;' nqn.2016-06.io.spdk:cnode5890 00:15:57.075 [2024-04-23 21:16:51.328964] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5890: invalid serial number 'asrY>4E]"n00d0/AS<\A;' 00:15:57.334 21:16:51 -- target/invalid.sh@54 -- # out='request: 00:15:57.334 { 00:15:57.334 "nqn": "nqn.2016-06.io.spdk:cnode5890", 00:15:57.334 "serial_number": "asrY>4E]\"n00d0/AS<\\A;", 00:15:57.334 "method": "nvmf_create_subsystem", 00:15:57.334 "req_id": 1 00:15:57.334 } 00:15:57.334 Got JSON-RPC error response 00:15:57.334 response: 00:15:57.334 { 00:15:57.334 "code": -32602, 00:15:57.334 "message": "Invalid SN asrY>4E]\"n00d0/AS<\\A;" 00:15:57.334 }' 00:15:57.334 21:16:51 -- target/invalid.sh@55 -- # [[ request: 00:15:57.334 { 00:15:57.334 "nqn": "nqn.2016-06.io.spdk:cnode5890", 00:15:57.334 "serial_number": "asrY>4E]\"n00d0/AS<\\A;", 00:15:57.334 "method": "nvmf_create_subsystem", 00:15:57.334 "req_id": 1 00:15:57.334 } 00:15:57.334 Got JSON-RPC error response 00:15:57.334 response: 00:15:57.334 { 00:15:57.334 "code": -32602, 00:15:57.334 "message": "Invalid SN asrY>4E]\"n00d0/AS<\\A;" 
00:15:57.334 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:15:57.334 21:16:51 -- target/invalid.sh@58 -- # gen_random_s 41 00:15:57.334 21:16:51 -- target/invalid.sh@19 -- # local length=41 ll 00:15:57.334 21:16:51 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:15:57.334 21:16:51 -- target/invalid.sh@21 -- # local chars 00:15:57.334 21:16:51 -- target/invalid.sh@22 -- # local string 00:15:57.334 21:16:51 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:15:57.334 21:16:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:57.334 21:16:51 -- target/invalid.sh@25 -- # printf %x 62 00:15:57.334 21:16:51 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:15:57.334 21:16:51 -- target/invalid.sh@25 -- # string+='>' 00:15:57.334 21:16:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:57.334 21:16:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:57.334 21:16:51 -- target/invalid.sh@25 -- # printf %x 91 00:15:57.334 21:16:51 -- target/invalid.sh@25 -- # echo -e '\x5b' 00:15:57.334 21:16:51 -- target/invalid.sh@25 -- # string+='[' 00:15:57.334 21:16:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:57.334 21:16:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:57.334 21:16:51 -- target/invalid.sh@25 -- # printf %x 66 00:15:57.334 21:16:51 -- target/invalid.sh@25 -- # echo -e '\x42' 00:15:57.334 21:16:51 -- target/invalid.sh@25 -- # string+=B 00:15:57.334 21:16:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:57.334 21:16:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:57.334 21:16:51 -- target/invalid.sh@25 -- # printf %x 79 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # echo -e '\x4f' 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # string+=O 00:15:57.335 21:16:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:57.335 21:16:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # printf %x 79 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # echo -e '\x4f' 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # string+=O 00:15:57.335 21:16:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:57.335 21:16:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # printf %x 96 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # echo -e '\x60' 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # string+='`' 00:15:57.335 21:16:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:57.335 21:16:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # printf %x 73 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # echo -e '\x49' 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # string+=I 00:15:57.335 21:16:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:57.335 21:16:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # printf %x 77 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # echo -e '\x4d' 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # string+=M 00:15:57.335 21:16:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:57.335 21:16:51 -- 
target/invalid.sh@24 -- # (( ll < length )) 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # printf %x 35 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # echo -e '\x23' 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # string+='#' 00:15:57.335 21:16:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:57.335 21:16:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # printf %x 79 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # echo -e '\x4f' 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # string+=O 00:15:57.335 21:16:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:57.335 21:16:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # printf %x 107 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # echo -e '\x6b' 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # string+=k 00:15:57.335 21:16:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:57.335 21:16:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # printf %x 99 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # echo -e '\x63' 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # string+=c 00:15:57.335 21:16:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:57.335 21:16:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # printf %x 71 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # echo -e '\x47' 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # string+=G 00:15:57.335 21:16:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:57.335 21:16:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # printf %x 124 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # string+='|' 00:15:57.335 21:16:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:57.335 21:16:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # printf %x 57 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # echo -e '\x39' 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # string+=9 00:15:57.335 21:16:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:57.335 21:16:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # printf %x 59 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # echo -e '\x3b' 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # string+=';' 00:15:57.335 21:16:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:57.335 21:16:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # printf %x 60 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # echo -e '\x3c' 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # string+='<' 00:15:57.335 21:16:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:57.335 21:16:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # printf %x 90 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # echo -e '\x5a' 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # string+=Z 00:15:57.335 21:16:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:57.335 21:16:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # printf %x 63 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # echo -e '\x3f' 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # string+='?' 
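One detail from the first pass is easy to miss: after the 21-character string was complete, the trace showed [[ a == \- ]] -- the generator inspects the first character for a dash, presumably so a random value can never be mistaken for an option flag by rpc.py (the 41-character pass running here repeats the check later as [[ > == \- ]]). A hypothetical re-roll guard on top of the gen_random_s sketch above; the exact logic in invalid.sh may differ:

    string=$(gen_random_s 41)
    while [[ $string == -* ]]; do     # never hand rpc.py something flag-shaped
        string=$(gen_random_s 41)
    done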
00:15:57.335 21:16:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:57.335 21:16:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # printf %x 110 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # echo -e '\x6e' 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # string+=n 00:15:57.335 21:16:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:57.335 21:16:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # printf %x 123 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # echo -e '\x7b' 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # string+='{' 00:15:57.335 21:16:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:57.335 21:16:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # printf %x 74 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # echo -e '\x4a' 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # string+=J 00:15:57.335 21:16:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:57.335 21:16:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # printf %x 49 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # echo -e '\x31' 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # string+=1 00:15:57.335 21:16:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:57.335 21:16:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # printf %x 85 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # echo -e '\x55' 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # string+=U 00:15:57.335 21:16:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:57.335 21:16:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # printf %x 62 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # string+='>' 00:15:57.335 21:16:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:57.335 21:16:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # printf %x 66 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # echo -e '\x42' 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # string+=B 00:15:57.335 21:16:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:57.335 21:16:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # printf %x 79 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # echo -e '\x4f' 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # string+=O 00:15:57.335 21:16:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:57.335 21:16:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # printf %x 40 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # echo -e '\x28' 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # string+='(' 00:15:57.335 21:16:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:57.335 21:16:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # printf %x 92 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # echo -e '\x5c' 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # string+='\' 00:15:57.335 21:16:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:57.335 21:16:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # printf %x 68 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # echo -e '\x44' 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # string+=D 
00:15:57.335 21:16:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:57.335 21:16:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # printf %x 111 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # echo -e '\x6f' 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # string+=o 00:15:57.335 21:16:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:57.335 21:16:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # printf %x 91 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # echo -e '\x5b' 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # string+='[' 00:15:57.335 21:16:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:57.335 21:16:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # printf %x 48 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # echo -e '\x30' 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # string+=0 00:15:57.335 21:16:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:57.335 21:16:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # printf %x 49 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # echo -e '\x31' 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # string+=1 00:15:57.335 21:16:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:57.335 21:16:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # printf %x 45 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # echo -e '\x2d' 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # string+=- 00:15:57.335 21:16:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:57.335 21:16:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # printf %x 70 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # echo -e '\x46' 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # string+=F 00:15:57.335 21:16:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:57.335 21:16:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # printf %x 63 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # echo -e '\x3f' 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # string+='?' 00:15:57.335 21:16:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:57.335 21:16:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # printf %x 40 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # echo -e '\x28' 00:15:57.335 21:16:51 -- target/invalid.sh@25 -- # string+='(' 00:15:57.335 21:16:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:57.335 21:16:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:57.596 21:16:51 -- target/invalid.sh@25 -- # printf %x 40 00:15:57.596 21:16:51 -- target/invalid.sh@25 -- # echo -e '\x28' 00:15:57.596 21:16:51 -- target/invalid.sh@25 -- # string+='(' 00:15:57.596 21:16:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:57.596 21:16:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:57.596 21:16:51 -- target/invalid.sh@25 -- # printf %x 61 00:15:57.596 21:16:51 -- target/invalid.sh@25 -- # echo -e '\x3d' 00:15:57.596 21:16:51 -- target/invalid.sh@25 -- # string+== 00:15:57.596 21:16:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:57.596 21:16:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:57.596 21:16:51 -- target/invalid.sh@25 -- # printf %x 33 00:15:57.596 21:16:51 -- target/invalid.sh@25 -- # echo -e '\x21' 00:15:57.596 21:16:51 -- target/invalid.sh@25 -- # string+='!' 
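The 41-character model number is finished and echoed just below; after the TCP transport is created and a listener-removal failure is checked, the remaining cases probe controller-ID bounds through nvmf_create_subsystem's -i/-I (min/max cntlid) options. From the errors that follow, SPDK accepts cntlid values in 1-65519 and requires min <= max. A sketch of the three failure modes, with a hypothetical cnode1 in place of the random cnode numbers:

    # Each call is expected to fail with "Invalid cntlid range [...]".
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -i 0        # min below 1
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -I 65520    # max above 65519
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -i 6 -I 5   # min greater than max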
00:15:57.596 21:16:51 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:57.596 21:16:51 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:57.596 21:16:51 -- target/invalid.sh@28 -- # [[ > == \- ]] 00:15:57.596 21:16:51 -- target/invalid.sh@31 -- # echo '>[BOO`IM#OkcG|9;BO(\Do[01-F?((=!' 00:15:57.596 21:16:51 -- target/invalid.sh@58 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '>[BOO`IM#OkcG|9;BO(\Do[01-F?((=!' nqn.2016-06.io.spdk:cnode21838 00:15:57.596 [2024-04-23 21:16:51.741464] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21838: invalid model number '>[BOO`IM#OkcG|9;BO(\Do[01-F?((=!' 00:15:57.596 21:16:51 -- target/invalid.sh@58 -- # out='request: 00:15:57.596 { 00:15:57.596 "nqn": "nqn.2016-06.io.spdk:cnode21838", 00:15:57.596 "model_number": ">[BOO`IM#OkcG|9;BO(\\Do[01-F?((=!", 00:15:57.596 "method": "nvmf_create_subsystem", 00:15:57.596 "req_id": 1 00:15:57.596 } 00:15:57.596 Got JSON-RPC error response 00:15:57.596 response: 00:15:57.596 { 00:15:57.596 "code": -32602, 00:15:57.596 "message": "Invalid MN >[BOO`IM#OkcG|9;BO(\\Do[01-F?((=!" 00:15:57.596 }' 00:15:57.596 21:16:51 -- target/invalid.sh@59 -- # [[ request: 00:15:57.596 { 00:15:57.596 "nqn": "nqn.2016-06.io.spdk:cnode21838", 00:15:57.596 "model_number": ">[BOO`IM#OkcG|9;BO(\\Do[01-F?((=!", 00:15:57.596 "method": "nvmf_create_subsystem", 00:15:57.596 "req_id": 1 00:15:57.596 } 00:15:57.596 Got JSON-RPC error response 00:15:57.596 response: 00:15:57.596 { 00:15:57.596 "code": -32602, 00:15:57.596 "message": "Invalid MN >[BOO`IM#OkcG|9;BO(\\Do[01-F?((=!" 00:15:57.596 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:15:57.596 21:16:51 -- target/invalid.sh@62 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:15:57.857 [2024-04-23 21:16:51.885696] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:57.858 21:16:51 -- target/invalid.sh@63 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:15:57.858 21:16:52 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:15:57.858 21:16:52 -- target/invalid.sh@67 -- # head -n 1 00:15:57.858 21:16:52 -- target/invalid.sh@67 -- # echo '' 00:15:57.858 21:16:52 -- target/invalid.sh@67 -- # IP= 00:15:57.858 21:16:52 -- target/invalid.sh@69 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:15:58.117 [2024-04-23 21:16:52.210103] nvmf_rpc.c: 792:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:15:58.117 21:16:52 -- target/invalid.sh@69 -- # out='request: 00:15:58.117 { 00:15:58.117 "nqn": "nqn.2016-06.io.spdk:cnode", 00:15:58.117 "listen_address": { 00:15:58.117 "trtype": "tcp", 00:15:58.117 "traddr": "", 00:15:58.117 "trsvcid": "4421" 00:15:58.117 }, 00:15:58.117 "method": "nvmf_subsystem_remove_listener", 00:15:58.117 "req_id": 1 00:15:58.117 } 00:15:58.117 Got JSON-RPC error response 00:15:58.117 response: 00:15:58.117 { 00:15:58.117 "code": -32602, 00:15:58.117 "message": "Invalid parameters" 00:15:58.117 }' 00:15:58.117 21:16:52 -- target/invalid.sh@70 -- # [[ request: 00:15:58.117 { 00:15:58.117 "nqn": "nqn.2016-06.io.spdk:cnode", 00:15:58.117 "listen_address": { 00:15:58.117 "trtype": "tcp", 00:15:58.117 "traddr": "", 00:15:58.117 "trsvcid": "4421" 00:15:58.117 }, 00:15:58.117 "method": "nvmf_subsystem_remove_listener", 00:15:58.117 "req_id": 1 00:15:58.117 } 
00:15:58.117 Got JSON-RPC error response 00:15:58.117 response: 00:15:58.117 { 00:15:58.117 "code": -32602, 00:15:58.117 "message": "Invalid parameters" 00:15:58.117 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:15:58.117 21:16:52 -- target/invalid.sh@73 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode21730 -i 0 00:15:58.117 [2024-04-23 21:16:52.366250] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21730: invalid cntlid range [0-65519] 00:15:58.378 21:16:52 -- target/invalid.sh@73 -- # out='request: 00:15:58.378 { 00:15:58.378 "nqn": "nqn.2016-06.io.spdk:cnode21730", 00:15:58.378 "min_cntlid": 0, 00:15:58.378 "method": "nvmf_create_subsystem", 00:15:58.378 "req_id": 1 00:15:58.378 } 00:15:58.378 Got JSON-RPC error response 00:15:58.378 response: 00:15:58.378 { 00:15:58.378 "code": -32602, 00:15:58.378 "message": "Invalid cntlid range [0-65519]" 00:15:58.378 }' 00:15:58.378 21:16:52 -- target/invalid.sh@74 -- # [[ request: 00:15:58.378 { 00:15:58.378 "nqn": "nqn.2016-06.io.spdk:cnode21730", 00:15:58.378 "min_cntlid": 0, 00:15:58.378 "method": "nvmf_create_subsystem", 00:15:58.378 "req_id": 1 00:15:58.378 } 00:15:58.378 Got JSON-RPC error response 00:15:58.378 response: 00:15:58.378 { 00:15:58.378 "code": -32602, 00:15:58.378 "message": "Invalid cntlid range [0-65519]" 00:15:58.378 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:58.378 21:16:52 -- target/invalid.sh@75 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode29315 -i 65520 00:15:58.378 [2024-04-23 21:16:52.526464] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29315: invalid cntlid range [65520-65519] 00:15:58.378 21:16:52 -- target/invalid.sh@75 -- # out='request: 00:15:58.378 { 00:15:58.378 "nqn": "nqn.2016-06.io.spdk:cnode29315", 00:15:58.378 "min_cntlid": 65520, 00:15:58.378 "method": "nvmf_create_subsystem", 00:15:58.378 "req_id": 1 00:15:58.378 } 00:15:58.378 Got JSON-RPC error response 00:15:58.378 response: 00:15:58.378 { 00:15:58.378 "code": -32602, 00:15:58.378 "message": "Invalid cntlid range [65520-65519]" 00:15:58.378 }' 00:15:58.378 21:16:52 -- target/invalid.sh@76 -- # [[ request: 00:15:58.378 { 00:15:58.378 "nqn": "nqn.2016-06.io.spdk:cnode29315", 00:15:58.378 "min_cntlid": 65520, 00:15:58.378 "method": "nvmf_create_subsystem", 00:15:58.378 "req_id": 1 00:15:58.378 } 00:15:58.378 Got JSON-RPC error response 00:15:58.378 response: 00:15:58.378 { 00:15:58.378 "code": -32602, 00:15:58.378 "message": "Invalid cntlid range [65520-65519]" 00:15:58.378 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:58.378 21:16:52 -- target/invalid.sh@77 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15360 -I 0 00:15:58.638 [2024-04-23 21:16:52.686692] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15360: invalid cntlid range [1-0] 00:15:58.638 21:16:52 -- target/invalid.sh@77 -- # out='request: 00:15:58.638 { 00:15:58.638 "nqn": "nqn.2016-06.io.spdk:cnode15360", 00:15:58.638 "max_cntlid": 0, 00:15:58.638 "method": "nvmf_create_subsystem", 00:15:58.638 "req_id": 1 00:15:58.638 } 00:15:58.638 Got JSON-RPC error response 00:15:58.638 response: 00:15:58.638 { 00:15:58.638 "code": -32602, 00:15:58.638 "message": "Invalid cntlid range [1-0]" 00:15:58.638 }' 00:15:58.638 21:16:52 -- target/invalid.sh@78 -- 
# [[ request: 00:15:58.638 { 00:15:58.638 "nqn": "nqn.2016-06.io.spdk:cnode15360", 00:15:58.638 "max_cntlid": 0, 00:15:58.638 "method": "nvmf_create_subsystem", 00:15:58.638 "req_id": 1 00:15:58.638 } 00:15:58.638 Got JSON-RPC error response 00:15:58.638 response: 00:15:58.638 { 00:15:58.638 "code": -32602, 00:15:58.638 "message": "Invalid cntlid range [1-0]" 00:15:58.638 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:58.638 21:16:52 -- target/invalid.sh@79 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23467 -I 65520 00:15:58.638 [2024-04-23 21:16:52.854875] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23467: invalid cntlid range [1-65520] 00:15:58.638 21:16:52 -- target/invalid.sh@79 -- # out='request: 00:15:58.638 { 00:15:58.638 "nqn": "nqn.2016-06.io.spdk:cnode23467", 00:15:58.638 "max_cntlid": 65520, 00:15:58.638 "method": "nvmf_create_subsystem", 00:15:58.638 "req_id": 1 00:15:58.638 } 00:15:58.638 Got JSON-RPC error response 00:15:58.638 response: 00:15:58.638 { 00:15:58.638 "code": -32602, 00:15:58.638 "message": "Invalid cntlid range [1-65520]" 00:15:58.638 }' 00:15:58.638 21:16:52 -- target/invalid.sh@80 -- # [[ request: 00:15:58.638 { 00:15:58.638 "nqn": "nqn.2016-06.io.spdk:cnode23467", 00:15:58.638 "max_cntlid": 65520, 00:15:58.638 "method": "nvmf_create_subsystem", 00:15:58.638 "req_id": 1 00:15:58.638 } 00:15:58.638 Got JSON-RPC error response 00:15:58.638 response: 00:15:58.638 { 00:15:58.638 "code": -32602, 00:15:58.638 "message": "Invalid cntlid range [1-65520]" 00:15:58.638 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:58.638 21:16:52 -- target/invalid.sh@83 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16789 -i 6 -I 5 00:15:58.895 [2024-04-23 21:16:52.999067] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16789: invalid cntlid range [6-5] 00:15:58.895 21:16:53 -- target/invalid.sh@83 -- # out='request: 00:15:58.895 { 00:15:58.895 "nqn": "nqn.2016-06.io.spdk:cnode16789", 00:15:58.895 "min_cntlid": 6, 00:15:58.895 "max_cntlid": 5, 00:15:58.895 "method": "nvmf_create_subsystem", 00:15:58.895 "req_id": 1 00:15:58.895 } 00:15:58.895 Got JSON-RPC error response 00:15:58.895 response: 00:15:58.895 { 00:15:58.895 "code": -32602, 00:15:58.895 "message": "Invalid cntlid range [6-5]" 00:15:58.895 }' 00:15:58.895 21:16:53 -- target/invalid.sh@84 -- # [[ request: 00:15:58.895 { 00:15:58.895 "nqn": "nqn.2016-06.io.spdk:cnode16789", 00:15:58.895 "min_cntlid": 6, 00:15:58.895 "max_cntlid": 5, 00:15:58.895 "method": "nvmf_create_subsystem", 00:15:58.895 "req_id": 1 00:15:58.895 } 00:15:58.895 Got JSON-RPC error response 00:15:58.895 response: 00:15:58.895 { 00:15:58.895 "code": -32602, 00:15:58.895 "message": "Invalid cntlid range [6-5]" 00:15:58.895 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:58.895 21:16:53 -- target/invalid.sh@87 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:15:58.895 21:16:53 -- target/invalid.sh@87 -- # out='request: 00:15:58.895 { 00:15:58.895 "name": "foobar", 00:15:58.895 "method": "nvmf_delete_target", 00:15:58.895 "req_id": 1 00:15:58.895 } 00:15:58.895 Got JSON-RPC error response 00:15:58.895 response: 00:15:58.895 { 00:15:58.895 "code": -32602, 00:15:58.895 "message": "The specified target doesn'\''t exist, cannot delete it." 
00:15:58.895 }' 00:15:58.895 21:16:53 -- target/invalid.sh@88 -- # [[ request: 00:15:58.895 { 00:15:58.895 "name": "foobar", 00:15:58.895 "method": "nvmf_delete_target", 00:15:58.895 "req_id": 1 00:15:58.895 } 00:15:58.896 Got JSON-RPC error response 00:15:58.896 response: 00:15:58.896 { 00:15:58.896 "code": -32602, 00:15:58.896 "message": "The specified target doesn't exist, cannot delete it." 00:15:58.896 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:15:58.896 21:16:53 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:15:58.896 21:16:53 -- target/invalid.sh@91 -- # nvmftestfini 00:15:58.896 21:16:53 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:58.896 21:16:53 -- nvmf/common.sh@117 -- # sync 00:15:58.896 21:16:53 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:58.896 21:16:53 -- nvmf/common.sh@120 -- # set +e 00:15:58.896 21:16:53 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:58.896 21:16:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:58.896 rmmod nvme_tcp 00:15:58.896 rmmod nvme_fabrics 00:15:58.896 rmmod nvme_keyring 00:15:58.896 21:16:53 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:59.155 21:16:53 -- nvmf/common.sh@124 -- # set -e 00:15:59.155 21:16:53 -- nvmf/common.sh@125 -- # return 0 00:15:59.155 21:16:53 -- nvmf/common.sh@478 -- # '[' -n 1378678 ']' 00:15:59.155 21:16:53 -- nvmf/common.sh@479 -- # killprocess 1378678 00:15:59.155 21:16:53 -- common/autotest_common.sh@936 -- # '[' -z 1378678 ']' 00:15:59.155 21:16:53 -- common/autotest_common.sh@940 -- # kill -0 1378678 00:15:59.155 21:16:53 -- common/autotest_common.sh@941 -- # uname 00:15:59.155 21:16:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:59.155 21:16:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1378678 00:15:59.155 21:16:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:59.155 21:16:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:59.155 21:16:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1378678' 00:15:59.155 killing process with pid 1378678 00:15:59.155 21:16:53 -- common/autotest_common.sh@955 -- # kill 1378678 00:15:59.155 21:16:53 -- common/autotest_common.sh@960 -- # wait 1378678 00:15:59.417 21:16:53 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:59.417 21:16:53 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:59.417 21:16:53 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:59.417 21:16:53 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:59.417 21:16:53 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:59.417 21:16:53 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:59.417 21:16:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:59.417 21:16:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:01.957 21:16:55 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:01.957 00:16:01.957 real 0m12.698s 00:16:01.957 user 0m17.581s 00:16:01.957 sys 0m5.938s 00:16:01.957 21:16:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:01.957 21:16:55 -- common/autotest_common.sh@10 -- # set +x 00:16:01.957 ************************************ 00:16:01.957 END TEST nvmf_invalid 00:16:01.957 ************************************ 00:16:01.957 21:16:55 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:16:01.957 
21:16:55 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:01.957 21:16:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:01.957 21:16:55 -- common/autotest_common.sh@10 -- # set +x 00:16:01.957 ************************************ 00:16:01.957 START TEST nvmf_abort 00:16:01.957 ************************************ 00:16:01.957 21:16:55 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:16:01.957 * Looking for test storage... 00:16:01.957 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:16:01.957 21:16:55 -- target/abort.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:16:01.957 21:16:55 -- nvmf/common.sh@7 -- # uname -s 00:16:01.957 21:16:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:01.957 21:16:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:01.957 21:16:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:01.957 21:16:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:01.957 21:16:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:01.957 21:16:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:01.957 21:16:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:01.957 21:16:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:01.957 21:16:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:01.957 21:16:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:01.957 21:16:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:16:01.957 21:16:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:16:01.957 21:16:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:01.957 21:16:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:01.957 21:16:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:16:01.957 21:16:55 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:01.957 21:16:55 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:16:01.957 21:16:55 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:01.957 21:16:55 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:01.957 21:16:55 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:01.957 21:16:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.957 21:16:55 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.957 21:16:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.957 21:16:55 -- paths/export.sh@5 -- # export PATH 00:16:01.957 21:16:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.957 21:16:55 -- nvmf/common.sh@47 -- # : 0 00:16:01.957 21:16:55 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:01.957 21:16:55 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:01.957 21:16:55 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:01.957 21:16:55 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:01.957 21:16:55 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:01.957 21:16:55 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:01.957 21:16:55 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:01.957 21:16:55 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:01.957 21:16:55 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:01.957 21:16:55 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:16:01.957 21:16:55 -- target/abort.sh@14 -- # nvmftestinit 00:16:01.957 21:16:55 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:01.957 21:16:55 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:01.957 21:16:55 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:01.957 21:16:55 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:01.957 21:16:55 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:01.957 21:16:55 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:01.957 21:16:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:01.957 21:16:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:01.957 21:16:55 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:16:01.957 21:16:55 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:01.957 21:16:55 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:01.957 21:16:55 -- common/autotest_common.sh@10 -- # set +x 00:16:07.306 21:17:01 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 
pci 00:16:07.306 21:17:01 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:07.306 21:17:01 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:07.306 21:17:01 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:07.306 21:17:01 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:07.306 21:17:01 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:07.306 21:17:01 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:07.307 21:17:01 -- nvmf/common.sh@295 -- # net_devs=() 00:16:07.307 21:17:01 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:07.307 21:17:01 -- nvmf/common.sh@296 -- # e810=() 00:16:07.307 21:17:01 -- nvmf/common.sh@296 -- # local -ga e810 00:16:07.307 21:17:01 -- nvmf/common.sh@297 -- # x722=() 00:16:07.307 21:17:01 -- nvmf/common.sh@297 -- # local -ga x722 00:16:07.307 21:17:01 -- nvmf/common.sh@298 -- # mlx=() 00:16:07.307 21:17:01 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:07.307 21:17:01 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:07.307 21:17:01 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:07.307 21:17:01 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:07.307 21:17:01 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:07.307 21:17:01 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:07.307 21:17:01 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:07.307 21:17:01 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:07.307 21:17:01 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:07.307 21:17:01 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:07.307 21:17:01 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:07.307 21:17:01 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:07.307 21:17:01 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:07.307 21:17:01 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:07.307 21:17:01 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:16:07.307 21:17:01 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:16:07.307 21:17:01 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:16:07.307 21:17:01 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:07.307 21:17:01 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:07.307 21:17:01 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:16:07.307 Found 0000:27:00.0 (0x8086 - 0x159b) 00:16:07.307 21:17:01 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:07.307 21:17:01 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:07.307 21:17:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:07.307 21:17:01 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:07.307 21:17:01 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:07.307 21:17:01 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:07.307 21:17:01 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:16:07.307 Found 0000:27:00.1 (0x8086 - 0x159b) 00:16:07.307 21:17:01 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:07.307 21:17:01 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:07.307 21:17:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:07.307 21:17:01 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:07.307 21:17:01 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:07.307 21:17:01 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:07.307 
21:17:01 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:16:07.307 21:17:01 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:07.307 21:17:01 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:07.307 21:17:01 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:07.307 21:17:01 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:07.307 21:17:01 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:16:07.307 Found net devices under 0000:27:00.0: cvl_0_0 00:16:07.307 21:17:01 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:07.307 21:17:01 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:07.307 21:17:01 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:07.307 21:17:01 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:07.307 21:17:01 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:07.307 21:17:01 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:16:07.307 Found net devices under 0000:27:00.1: cvl_0_1 00:16:07.307 21:17:01 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:07.307 21:17:01 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:07.307 21:17:01 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:07.307 21:17:01 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:07.307 21:17:01 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:07.307 21:17:01 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:07.307 21:17:01 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:07.307 21:17:01 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:07.307 21:17:01 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:07.307 21:17:01 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:07.307 21:17:01 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:07.307 21:17:01 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:07.307 21:17:01 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:07.307 21:17:01 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:07.307 21:17:01 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:07.307 21:17:01 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:07.307 21:17:01 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:07.307 21:17:01 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:07.307 21:17:01 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:07.307 21:17:01 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:07.307 21:17:01 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:07.307 21:17:01 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:07.307 21:17:01 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:07.566 21:17:01 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:07.566 21:17:01 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:07.566 21:17:01 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:07.566 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:07.566 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:16:07.566 00:16:07.566 --- 10.0.0.2 ping statistics --- 00:16:07.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:07.566 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:16:07.566 21:17:01 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:07.566 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:07.566 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:16:07.566 00:16:07.567 --- 10.0.0.1 ping statistics --- 00:16:07.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:07.567 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:16:07.567 21:17:01 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:07.567 21:17:01 -- nvmf/common.sh@411 -- # return 0 00:16:07.567 21:17:01 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:07.567 21:17:01 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:07.567 21:17:01 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:07.567 21:17:01 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:07.567 21:17:01 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:07.567 21:17:01 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:07.567 21:17:01 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:07.567 21:17:01 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:16:07.567 21:17:01 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:07.567 21:17:01 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:07.567 21:17:01 -- common/autotest_common.sh@10 -- # set +x 00:16:07.567 21:17:01 -- nvmf/common.sh@470 -- # nvmfpid=1383476 00:16:07.567 21:17:01 -- nvmf/common.sh@471 -- # waitforlisten 1383476 00:16:07.567 21:17:01 -- common/autotest_common.sh@817 -- # '[' -z 1383476 ']' 00:16:07.567 21:17:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:07.567 21:17:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:07.567 21:17:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:07.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:07.567 21:17:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:07.567 21:17:01 -- common/autotest_common.sh@10 -- # set +x 00:16:07.567 21:17:01 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:07.567 [2024-04-23 21:17:01.757405] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 00:16:07.567 [2024-04-23 21:17:01.757507] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:07.567 EAL: No free 2048 kB hugepages reported on node 1 00:16:07.827 [2024-04-23 21:17:01.878370] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:07.827 [2024-04-23 21:17:01.975969] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:07.827 [2024-04-23 21:17:01.976005] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:07.827 [2024-04-23 21:17:01.976015] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:16:07.827 [2024-04-23 21:17:01.976024] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:16:07.827 [2024-04-23 21:17:01.976032] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:16:07.827 [2024-04-23 21:17:01.976190] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:16:07.827 [2024-04-23 21:17:01.976297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:16:07.827 [2024-04-23 21:17:01.976311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:16:08.399 21:17:02 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:16:08.399 21:17:02 -- common/autotest_common.sh@850 -- # return 0
00:16:08.399 21:17:02 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:16:08.399 21:17:02 -- common/autotest_common.sh@716 -- # xtrace_disable
00:16:08.399 21:17:02 -- common/autotest_common.sh@10 -- # set +x
00:16:08.399 21:17:02 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:16:08.399 21:17:02 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256
00:16:08.399 21:17:02 -- common/autotest_common.sh@549 -- # xtrace_disable
00:16:08.399 21:17:02 -- common/autotest_common.sh@10 -- # set +x
00:16:08.399 [2024-04-23 21:17:02.510895] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:16:08.399 21:17:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:16:08.399 21:17:02 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0
00:16:08.399 21:17:02 -- common/autotest_common.sh@549 -- # xtrace_disable
00:16:08.399 21:17:02 -- common/autotest_common.sh@10 -- # set +x
00:16:08.399 Malloc0
00:16:08.399 21:17:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:16:08.399 21:17:02 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:16:08.399 21:17:02 -- common/autotest_common.sh@549 -- # xtrace_disable
00:16:08.399 21:17:02 -- common/autotest_common.sh@10 -- # set +x
00:16:08.399 Delay0
00:16:08.399 21:17:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:16:08.399 21:17:02 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:16:08.399 21:17:02 -- common/autotest_common.sh@549 -- # xtrace_disable
00:16:08.399 21:17:02 -- common/autotest_common.sh@10 -- # set +x
00:16:08.399 21:17:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:16:08.399 21:17:02 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
00:16:08.399 21:17:02 -- common/autotest_common.sh@549 -- # xtrace_disable
00:16:08.399 21:17:02 -- common/autotest_common.sh@10 -- # set +x
00:16:08.399 21:17:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:16:08.399 21:17:02 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:16:08.399 21:17:02 -- common/autotest_common.sh@549 -- # xtrace_disable
00:16:08.399 21:17:02 -- common/autotest_common.sh@10 -- # set +x
00:16:08.399 [2024-04-23 21:17:02.600796] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:16:08.399 21:17:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:16:08.399 21:17:02 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:16:08.399 21:17:02 -- common/autotest_common.sh@549 -- # xtrace_disable
00:16:08.399 21:17:02 -- common/autotest_common.sh@10 -- # set +x
00:16:08.399 21:17:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:16:08.399 21:17:02 -- target/abort.sh@30 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128
00:16:08.658 EAL: No free 2048 kB hugepages reported on node 1
00:16:08.658 [2024-04-23 21:17:02.757222] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:16:11.198 Initializing NVMe Controllers
00:16:11.198 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0
00:16:11.198 controller IO queue size 128 less than required
00:16:11.199 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver.
00:16:11.199 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0
00:16:11.199 Initialization complete. Launching workers.
00:16:11.199 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 41994
00:16:11.199 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 42051, failed to submit 66
00:16:11.199 success 41994, unsuccess 57, failed 0
00:16:11.199 21:17:04 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:16:11.199 21:17:04 -- common/autotest_common.sh@549 -- # xtrace_disable
00:16:11.199 21:17:04 -- common/autotest_common.sh@10 -- # set +x
00:16:11.199 21:17:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:16:11.199 21:17:04 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT
00:16:11.199 21:17:04 -- target/abort.sh@38 -- # nvmftestfini
00:16:11.199 21:17:04 -- nvmf/common.sh@477 -- # nvmfcleanup
00:16:11.199 21:17:04 -- nvmf/common.sh@117 -- # sync
00:16:11.199 21:17:04 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:16:11.199 21:17:04 -- nvmf/common.sh@120 -- # set +e
00:16:11.199 21:17:04 -- nvmf/common.sh@121 -- # for i in {1..20}
00:16:11.199 21:17:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:16:11.199 rmmod nvme_tcp
00:16:11.199 rmmod nvme_fabrics
00:16:11.199 rmmod nvme_keyring
00:16:11.199 21:17:04 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:16:11.199 21:17:04 -- nvmf/common.sh@124 -- # set -e
00:16:11.199 21:17:04 -- nvmf/common.sh@125 -- # return 0
00:16:11.199 21:17:04 -- nvmf/common.sh@478 -- # '[' -n 1383476 ']'
00:16:11.199 21:17:04 -- nvmf/common.sh@479 -- # killprocess 1383476
00:16:11.199 21:17:04 -- common/autotest_common.sh@936 -- # '[' -z 1383476 ']'
00:16:11.199 21:17:04 -- common/autotest_common.sh@940 -- # kill -0 1383476
00:16:11.199 21:17:04 -- common/autotest_common.sh@941 -- # uname
00:16:11.199 21:17:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:16:11.199 21:17:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1383476
00:16:11.199 21:17:04 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:16:11.199 21:17:04 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:16:11.199 21:17:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1383476'
00:16:11.199 killing process with pid 1383476
00:16:11.199 21:17:04 -- common/autotest_common.sh@955 -- # kill 1383476
00:16:11.199 21:17:04 -- common/autotest_common.sh@960 -- # wait 1383476
00:16:11.460 21:17:05 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:16:11.460 21:17:05 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:16:11.460 21:17:05 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:16:11.460 21:17:05 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:16:11.460 21:17:05 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:16:11.460 21:17:05 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:16:11.460 21:17:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:16:11.460 21:17:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:16:13.369 21:17:07 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:16:13.369
00:16:13.369 real 0m11.662s
00:16:13.369 user 0m13.536s
00:16:13.369 sys 0m5.165s
00:16:13.369 21:17:07 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:16:13.369 21:17:07 -- common/autotest_common.sh@10 -- # set +x
00:16:13.369 ************************************
00:16:13.369 END TEST nvmf_abort
00:16:13.369 ************************************
00:16:13.369 21:17:07 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp
00:16:13.369 21:17:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:16:13.369 21:17:07 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:16:13.369 21:17:07 -- common/autotest_common.sh@10 -- # set +x
00:16:13.369 ************************************
00:16:13.629 START TEST nvmf_ns_hotplug_stress
00:16:13.629 ************************************
00:16:13.629 21:17:07 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp
00:16:13.629 * Looking for test storage...
00:16:13.629 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:16:13.629 21:17:07 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:16:13.629 21:17:07 -- nvmf/common.sh@7 -- # uname -s 00:16:13.629 21:17:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:13.629 21:17:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:13.629 21:17:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:13.629 21:17:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:13.629 21:17:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:13.629 21:17:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:13.629 21:17:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:13.629 21:17:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:13.629 21:17:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:13.629 21:17:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:13.629 21:17:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:16:13.629 21:17:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:16:13.629 21:17:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:13.629 21:17:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:13.629 21:17:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:16:13.629 21:17:07 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:13.629 21:17:07 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:16:13.629 21:17:07 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:13.629 21:17:07 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:13.629 21:17:07 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:13.629 21:17:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.629 21:17:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.629 21:17:07 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.629 21:17:07 -- paths/export.sh@5 -- # export PATH 00:16:13.629 21:17:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.629 21:17:07 -- nvmf/common.sh@47 -- # : 0 00:16:13.629 21:17:07 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:13.629 21:17:07 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:13.629 21:17:07 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:13.629 21:17:07 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:13.629 21:17:07 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:13.629 21:17:07 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:13.629 21:17:07 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:13.629 21:17:07 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:13.629 21:17:07 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:16:13.629 21:17:07 -- target/ns_hotplug_stress.sh@13 -- # nvmftestinit 00:16:13.629 21:17:07 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:13.629 21:17:07 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:13.629 21:17:07 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:13.629 21:17:07 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:13.629 21:17:07 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:13.629 21:17:07 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:13.629 21:17:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:13.629 21:17:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:13.629 21:17:07 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:16:13.629 21:17:07 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:13.629 21:17:07 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:13.629 21:17:07 -- common/autotest_common.sh@10 -- # set +x 00:16:18.909 21:17:12 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:18.910 21:17:12 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:18.910 21:17:12 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:18.910 21:17:12 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:18.910 21:17:12 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:18.910 21:17:12 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:18.910 21:17:12 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:18.910 21:17:12 -- nvmf/common.sh@295 -- # net_devs=() 00:16:18.910 21:17:12 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:18.910 21:17:12 -- 
nvmf/common.sh@296 -- # e810=() 00:16:18.910 21:17:12 -- nvmf/common.sh@296 -- # local -ga e810 00:16:18.910 21:17:12 -- nvmf/common.sh@297 -- # x722=() 00:16:18.910 21:17:12 -- nvmf/common.sh@297 -- # local -ga x722 00:16:18.910 21:17:12 -- nvmf/common.sh@298 -- # mlx=() 00:16:18.910 21:17:12 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:18.910 21:17:12 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:18.910 21:17:12 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:18.910 21:17:12 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:18.910 21:17:12 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:18.910 21:17:12 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:18.910 21:17:12 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:18.910 21:17:12 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:18.910 21:17:12 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:18.910 21:17:12 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:18.910 21:17:12 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:18.910 21:17:12 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:18.910 21:17:12 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:18.910 21:17:12 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:18.910 21:17:12 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:16:18.910 21:17:12 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:16:18.910 21:17:12 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:16:18.910 21:17:12 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:18.910 21:17:12 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:18.910 21:17:12 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:16:18.910 Found 0000:27:00.0 (0x8086 - 0x159b) 00:16:18.910 21:17:12 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:18.910 21:17:12 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:18.910 21:17:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:18.910 21:17:12 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:18.910 21:17:12 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:18.910 21:17:12 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:18.910 21:17:12 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:16:18.910 Found 0000:27:00.1 (0x8086 - 0x159b) 00:16:18.910 21:17:12 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:18.910 21:17:12 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:18.910 21:17:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:18.910 21:17:12 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:18.910 21:17:12 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:18.910 21:17:12 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:18.910 21:17:12 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:16:18.910 21:17:12 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:18.910 21:17:12 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:18.910 21:17:12 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:18.910 21:17:12 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:18.910 21:17:12 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:16:18.910 Found net devices under 0000:27:00.0: cvl_0_0 00:16:18.910 
21:17:12 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:18.910 21:17:12 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:18.910 21:17:12 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:18.910 21:17:12 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:18.910 21:17:12 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:18.910 21:17:12 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:16:18.910 Found net devices under 0000:27:00.1: cvl_0_1 00:16:18.910 21:17:12 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:18.910 21:17:12 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:18.910 21:17:12 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:18.910 21:17:12 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:18.910 21:17:12 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:18.910 21:17:12 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:18.910 21:17:12 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:18.910 21:17:12 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:18.910 21:17:12 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:18.910 21:17:12 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:18.910 21:17:12 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:18.910 21:17:12 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:18.910 21:17:12 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:18.910 21:17:12 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:18.910 21:17:12 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:18.910 21:17:12 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:18.910 21:17:12 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:18.910 21:17:12 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:18.910 21:17:12 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:18.910 21:17:12 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:18.910 21:17:12 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:18.910 21:17:12 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:18.910 21:17:12 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:18.910 21:17:12 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:18.910 21:17:12 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:18.910 21:17:12 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:18.910 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:18.910 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:16:18.910 00:16:18.910 --- 10.0.0.2 ping statistics --- 00:16:18.910 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:18.910 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:16:18.910 21:17:12 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:18.910 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:18.910 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:16:18.910 00:16:18.910 --- 10.0.0.1 ping statistics --- 00:16:18.910 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:18.910 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:16:18.910 21:17:12 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:18.910 21:17:12 -- nvmf/common.sh@411 -- # return 0 00:16:18.910 21:17:12 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:18.910 21:17:12 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:18.910 21:17:12 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:18.910 21:17:12 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:18.910 21:17:12 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:18.910 21:17:12 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:18.910 21:17:12 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:18.910 21:17:12 -- target/ns_hotplug_stress.sh@14 -- # nvmfappstart -m 0xE 00:16:18.910 21:17:12 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:18.910 21:17:12 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:18.910 21:17:12 -- common/autotest_common.sh@10 -- # set +x 00:16:18.910 21:17:12 -- nvmf/common.sh@470 -- # nvmfpid=1388127 00:16:18.910 21:17:12 -- nvmf/common.sh@471 -- # waitforlisten 1388127 00:16:18.910 21:17:12 -- common/autotest_common.sh@817 -- # '[' -z 1388127 ']' 00:16:18.910 21:17:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:18.910 21:17:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:18.910 21:17:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:18.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:18.910 21:17:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:18.910 21:17:12 -- common/autotest_common.sh@10 -- # set +x 00:16:18.910 21:17:12 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:18.910 [2024-04-23 21:17:12.950441] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 00:16:18.910 [2024-04-23 21:17:12.950543] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:18.910 EAL: No free 2048 kB hugepages reported on node 1 00:16:18.910 [2024-04-23 21:17:13.072576] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:18.910 [2024-04-23 21:17:13.170014] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:18.910 [2024-04-23 21:17:13.170047] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:18.911 [2024-04-23 21:17:13.170056] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:18.911 [2024-04-23 21:17:13.170065] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:18.911 [2024-04-23 21:17:13.170073] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:18.911 [2024-04-23 21:17:13.170213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:18.911 [2024-04-23 21:17:13.170323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:18.911 [2024-04-23 21:17:13.170334] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:19.478 21:17:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:19.478 21:17:13 -- common/autotest_common.sh@850 -- # return 0 00:16:19.478 21:17:13 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:19.478 21:17:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:19.478 21:17:13 -- common/autotest_common.sh@10 -- # set +x 00:16:19.478 21:17:13 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:19.478 21:17:13 -- target/ns_hotplug_stress.sh@16 -- # null_size=1000 00:16:19.478 21:17:13 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:19.740 [2024-04-23 21:17:13.785308] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:19.740 21:17:13 -- target/ns_hotplug_stress.sh@20 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:19.740 21:17:13 -- target/ns_hotplug_stress.sh@21 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:19.999 [2024-04-23 21:17:14.119713] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:19.999 21:17:14 -- target/ns_hotplug_stress.sh@22 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:20.260 21:17:14 -- target/ns_hotplug_stress.sh@23 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:16:20.260 Malloc0 00:16:20.260 21:17:14 -- target/ns_hotplug_stress.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:20.520 Delay0 00:16:20.520 21:17:14 -- target/ns_hotplug_stress.sh@25 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:20.520 21:17:14 -- target/ns_hotplug_stress.sh@26 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:16:20.779 NULL1 00:16:20.779 21:17:14 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:16:21.039 21:17:15 -- target/ns_hotplug_stress.sh@33 -- # PERF_PID=1388464 00:16:21.039 21:17:15 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1388464 00:16:21.039 21:17:15 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:16:21.039 21:17:15 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:21.039 EAL: No free 2048 kB hugepages reported on node 1 00:16:21.981 Read completed with error (sct=0, sc=11) 00:16:21.981 21:17:16 -- target/ns_hotplug_stress.sh@37 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:21.981 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:22.241 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:22.241 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:22.241 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:22.241 21:17:16 -- target/ns_hotplug_stress.sh@40 -- # null_size=1001 00:16:22.241 21:17:16 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:16:22.241 true 00:16:22.499 21:17:16 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1388464 00:16:22.499 21:17:16 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:23.440 21:17:17 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:23.440 21:17:17 -- target/ns_hotplug_stress.sh@40 -- # null_size=1002 00:16:23.440 21:17:17 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:16:23.440 true 00:16:23.440 21:17:17 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1388464 00:16:23.440 21:17:17 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:23.702 21:17:17 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:23.963 21:17:18 -- target/ns_hotplug_stress.sh@40 -- # null_size=1003 00:16:23.963 21:17:18 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:16:23.963 true 00:16:23.963 21:17:18 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1388464 00:16:23.963 21:17:18 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:24.225 21:17:18 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:24.225 21:17:18 -- target/ns_hotplug_stress.sh@40 -- # null_size=1004 00:16:24.225 21:17:18 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:16:24.484 true 00:16:24.484 21:17:18 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1388464 00:16:24.484 21:17:18 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:24.744 21:17:18 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:24.744 21:17:18 -- target/ns_hotplug_stress.sh@40 -- # null_size=1005 00:16:24.744 21:17:18 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:16:25.005 true 00:16:25.005 21:17:19 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1388464 00:16:25.005 21:17:19 -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:25.005 21:17:19 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:25.267 21:17:19 -- target/ns_hotplug_stress.sh@40 -- # null_size=1006 00:16:25.267 21:17:19 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:16:25.267 true 00:16:25.528 21:17:19 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1388464 00:16:25.528 21:17:19 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:26.466 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:26.466 21:17:20 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:26.466 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:26.466 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:26.466 21:17:20 -- target/ns_hotplug_stress.sh@40 -- # null_size=1007 00:16:26.466 21:17:20 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:16:26.466 true 00:16:26.725 21:17:20 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1388464 00:16:26.725 21:17:20 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:26.725 21:17:20 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:26.985 21:17:21 -- target/ns_hotplug_stress.sh@40 -- # null_size=1008 00:16:26.985 21:17:21 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:16:26.985 true 00:16:26.985 21:17:21 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1388464 00:16:26.985 21:17:21 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:27.245 21:17:21 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:27.245 21:17:21 -- target/ns_hotplug_stress.sh@40 -- # null_size=1009 00:16:27.245 21:17:21 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:16:27.506 true 00:16:27.506 21:17:21 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1388464 00:16:27.506 21:17:21 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:28.444 21:17:22 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:28.444 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:28.444 21:17:22 -- target/ns_hotplug_stress.sh@40 -- # null_size=1010 00:16:28.444 21:17:22 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:16:28.706 
true 00:16:28.706 21:17:22 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1388464 00:16:28.706 21:17:22 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:28.706 21:17:22 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:28.965 21:17:23 -- target/ns_hotplug_stress.sh@40 -- # null_size=1011 00:16:28.965 21:17:23 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:16:29.226 true 00:16:29.226 21:17:23 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1388464 00:16:29.226 21:17:23 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:29.226 21:17:23 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:29.485 21:17:23 -- target/ns_hotplug_stress.sh@40 -- # null_size=1012 00:16:29.485 21:17:23 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:16:29.485 true 00:16:29.485 21:17:23 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1388464 00:16:29.485 21:17:23 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:30.865 21:17:24 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:30.865 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:30.865 21:17:24 -- target/ns_hotplug_stress.sh@40 -- # null_size=1013 00:16:30.865 21:17:24 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:16:30.865 true 00:16:30.865 21:17:25 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1388464 00:16:30.865 21:17:25 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:31.123 21:17:25 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:31.123 21:17:25 -- target/ns_hotplug_stress.sh@40 -- # null_size=1014 00:16:31.123 21:17:25 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:16:31.381 true 00:16:31.381 21:17:25 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1388464 00:16:31.381 21:17:25 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:31.381 21:17:25 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:31.640 21:17:25 -- target/ns_hotplug_stress.sh@40 -- # null_size=1015 00:16:31.640 21:17:25 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:16:31.640 true 00:16:31.640 21:17:25 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1388464 00:16:31.640 21:17:25 -- 
target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:32.579 21:17:26 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:32.579 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:32.839 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:32.839 21:17:26 -- target/ns_hotplug_stress.sh@40 -- # null_size=1016 00:16:32.839 21:17:26 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:16:32.839 true 00:16:32.839 21:17:27 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1388464 00:16:32.839 21:17:27 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:33.100 21:17:27 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:33.362 21:17:27 -- target/ns_hotplug_stress.sh@40 -- # null_size=1017 00:16:33.362 21:17:27 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:16:33.362 true 00:16:33.362 21:17:27 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1388464 00:16:33.362 21:17:27 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:33.621 21:17:27 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:33.621 21:17:27 -- target/ns_hotplug_stress.sh@40 -- # null_size=1018 00:16:33.621 21:17:27 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:16:33.880 true 00:16:33.880 21:17:28 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1388464 00:16:33.880 21:17:28 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:34.822 21:17:29 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:35.079 21:17:29 -- target/ns_hotplug_stress.sh@40 -- # null_size=1019 00:16:35.079 21:17:29 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:16:35.079 true 00:16:35.079 21:17:29 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1388464 00:16:35.079 21:17:29 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:35.337 21:17:29 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:35.597 21:17:29 -- target/ns_hotplug_stress.sh@40 -- # null_size=1020 00:16:35.597 21:17:29 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:16:35.597 true 00:16:35.597 21:17:29 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1388464 00:16:35.597 21:17:29 -- 
target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:35.859 21:17:29 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:35.859 21:17:30 -- target/ns_hotplug_stress.sh@40 -- # null_size=1021 00:16:35.859 21:17:30 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:16:36.119 true 00:16:36.119 21:17:30 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1388464 00:16:36.119 21:17:30 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:36.119 21:17:30 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:36.380 21:17:30 -- target/ns_hotplug_stress.sh@40 -- # null_size=1022 00:16:36.380 21:17:30 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:16:36.380 true 00:16:36.640 21:17:30 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1388464 00:16:36.640 21:17:30 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:36.640 21:17:30 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:36.899 21:17:30 -- target/ns_hotplug_stress.sh@40 -- # null_size=1023 00:16:36.899 21:17:30 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:16:36.899 true 00:16:36.899 21:17:31 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1388464 00:16:36.899 21:17:31 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:38.288 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:38.288 21:17:32 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:38.288 21:17:32 -- target/ns_hotplug_stress.sh@40 -- # null_size=1024 00:16:38.288 21:17:32 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:16:38.288 true 00:16:38.288 21:17:32 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1388464 00:16:38.288 21:17:32 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:38.551 21:17:32 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:38.551 21:17:32 -- target/ns_hotplug_stress.sh@40 -- # null_size=1025 00:16:38.551 21:17:32 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:16:38.810 true 00:16:38.810 21:17:32 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1388464 00:16:38.810 21:17:32 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:38.810 21:17:33 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:39.070 21:17:33 -- target/ns_hotplug_stress.sh@40 -- # null_size=1026 00:16:39.070 21:17:33 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:16:39.070 true 00:16:39.070 21:17:33 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1388464 00:16:39.070 21:17:33 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:39.331 21:17:33 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:39.590 21:17:33 -- target/ns_hotplug_stress.sh@40 -- # null_size=1027 00:16:39.590 21:17:33 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:16:39.590 true 00:16:39.590 21:17:33 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1388464 00:16:39.590 21:17:33 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:39.849 21:17:33 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:39.849 21:17:34 -- target/ns_hotplug_stress.sh@40 -- # null_size=1028 00:16:39.849 21:17:34 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:16:40.108 true 00:16:40.108 21:17:34 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1388464 00:16:40.108 21:17:34 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:41.220 21:17:35 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:41.220 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:41.220 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:41.220 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:41.220 21:17:35 -- target/ns_hotplug_stress.sh@40 -- # null_size=1029 00:16:41.220 21:17:35 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:16:41.220 true 00:16:41.220 21:17:35 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1388464 00:16:41.220 21:17:35 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:41.478 21:17:35 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:41.737 21:17:35 -- target/ns_hotplug_stress.sh@40 -- # null_size=1030 00:16:41.737 21:17:35 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:16:41.737 true 00:16:41.737 21:17:35 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1388464 00:16:41.737 21:17:35 -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:41.995 21:17:36 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:41.995 21:17:36 -- target/ns_hotplug_stress.sh@40 -- # null_size=1031 00:16:41.995 21:17:36 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:16:42.252 true 00:16:42.252 21:17:36 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1388464 00:16:42.252 21:17:36 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:43.190 21:17:37 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:43.449 21:17:37 -- target/ns_hotplug_stress.sh@40 -- # null_size=1032 00:16:43.449 21:17:37 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:16:43.449 true 00:16:43.449 21:17:37 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1388464 00:16:43.449 21:17:37 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:43.708 21:17:37 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:43.708 21:17:37 -- target/ns_hotplug_stress.sh@40 -- # null_size=1033 00:16:43.708 21:17:37 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:16:43.966 true 00:16:43.966 21:17:38 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1388464 00:16:43.966 21:17:38 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:43.966 21:17:38 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:44.227 21:17:38 -- target/ns_hotplug_stress.sh@40 -- # null_size=1034 00:16:44.227 21:17:38 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:16:44.227 true 00:16:44.487 21:17:38 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1388464 00:16:44.487 21:17:38 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:44.487 21:17:38 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:44.748 21:17:38 -- target/ns_hotplug_stress.sh@40 -- # null_size=1035 00:16:44.748 21:17:38 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:16:44.748 true 00:16:44.748 21:17:38 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1388464 00:16:44.748 21:17:38 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:45.009 21:17:39 -- target/ns_hotplug_stress.sh@37 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:45.269 21:17:39 -- target/ns_hotplug_stress.sh@40 -- # null_size=1036 00:16:45.269 21:17:39 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:16:45.269 true 00:16:45.269 21:17:39 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1388464 00:16:45.269 21:17:39 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:46.206 21:17:40 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:46.468 21:17:40 -- target/ns_hotplug_stress.sh@40 -- # null_size=1037 00:16:46.468 21:17:40 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:16:46.468 true 00:16:46.468 21:17:40 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1388464 00:16:46.468 21:17:40 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:46.728 21:17:40 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:46.989 21:17:41 -- target/ns_hotplug_stress.sh@40 -- # null_size=1038 00:16:46.989 21:17:41 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:16:46.989 true 00:16:46.989 21:17:41 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1388464 00:16:46.989 21:17:41 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:47.252 21:17:41 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:47.513 21:17:41 -- target/ns_hotplug_stress.sh@40 -- # null_size=1039 00:16:47.513 21:17:41 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:16:47.513 true 00:16:47.513 21:17:41 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1388464 00:16:47.513 21:17:41 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:47.775 21:17:41 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:47.775 21:17:41 -- target/ns_hotplug_stress.sh@40 -- # null_size=1040 00:16:47.775 21:17:41 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:16:48.037 true 00:16:48.037 21:17:42 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1388464 00:16:48.037 21:17:42 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:48.037 21:17:42 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:48.297 21:17:42 -- target/ns_hotplug_stress.sh@40 -- # null_size=1041 
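The repeating @35-@41 entries above are one iteration of the hotplug stress loop: while the I/O generator (pid 1388464) is alive, the script hot-removes namespace 1 from cnode1, re-attaches the Delay0 bdev, and resizes the NULL1 bdev upward by one step. A minimal sketch of that loop, reconstructed from the trace (the rpc.py path is shortened and the variable names are assumptions, not the script's exact source):

    rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
    null_size=1000
    while kill -0 "$PERF_PID" 2>/dev/null; do                         # @35: I/O generator still running?
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # @36: hot-remove NSID 1
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # @37: re-attach the bdev
        null_size=$((null_size + 1))                                  # @40
        $rpc bdev_null_resize NULL1 "$null_size"                      # @41: grow NULL1 to the new size
    done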
00:16:48.297 21:17:42 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:16:48.558 true 00:16:48.558 21:17:42 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1388464 00:16:48.558 21:17:42 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:49.513 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:49.513 21:17:43 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:49.513 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:49.513 21:17:43 -- target/ns_hotplug_stress.sh@40 -- # null_size=1042 00:16:49.513 21:17:43 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:16:49.774 true 00:16:49.774 21:17:43 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1388464 00:16:49.774 21:17:43 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:49.774 21:17:44 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:50.034 21:17:44 -- target/ns_hotplug_stress.sh@40 -- # null_size=1043 00:16:50.035 21:17:44 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:16:50.035 true 00:16:50.297 21:17:44 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1388464 00:16:50.297 21:17:44 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:50.297 21:17:44 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:50.556 21:17:44 -- target/ns_hotplug_stress.sh@40 -- # null_size=1044 00:16:50.556 21:17:44 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:16:50.556 true 00:16:50.556 21:17:44 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1388464 00:16:50.556 21:17:44 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:50.814 21:17:44 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:50.814 21:17:45 -- target/ns_hotplug_stress.sh@40 -- # null_size=1045 00:16:50.814 21:17:45 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:16:51.073 true 00:16:51.073 21:17:45 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1388464 00:16:51.073 21:17:45 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:51.073 Initializing NVMe Controllers 00:16:51.073 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:51.073 Controller IO queue size 128, less than required. 
00:16:51.073 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:51.073 Controller IO queue size 128, less than required. 00:16:51.073 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:51.073 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:51.073 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:16:51.073 Initialization complete. Launching workers. 00:16:51.073 ======================================================== 00:16:51.073 Latency(us) 00:16:51.073 Device Information : IOPS MiB/s Average min max 00:16:51.073 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 625.88 0.31 89491.42 2467.10 1086144.50 00:16:51.073 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 12031.76 5.87 10638.49 2248.31 365621.39 00:16:51.073 ======================================================== 00:16:51.073 Total : 12657.65 6.18 14537.53 2248.31 1086144.50 00:16:51.073 00:16:51.334 21:17:45 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:51.334 21:17:45 -- target/ns_hotplug_stress.sh@40 -- # null_size=1046 00:16:51.334 21:17:45 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:16:51.595 true 00:16:51.595 21:17:45 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1388464 00:16:51.595 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 35: kill: (1388464) - No such process 00:16:51.595 21:17:45 -- target/ns_hotplug_stress.sh@44 -- # wait 1388464 00:16:51.595 21:17:45 -- target/ns_hotplug_stress.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:16:51.595 21:17:45 -- target/ns_hotplug_stress.sh@48 -- # nvmftestfini 00:16:51.595 21:17:45 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:51.595 21:17:45 -- nvmf/common.sh@117 -- # sync 00:16:51.595 21:17:45 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:51.595 21:17:45 -- nvmf/common.sh@120 -- # set +e 00:16:51.595 21:17:45 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:51.595 21:17:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:51.595 rmmod nvme_tcp 00:16:51.595 rmmod nvme_fabrics 00:16:51.595 rmmod nvme_keyring 00:16:51.595 21:17:45 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:51.595 21:17:45 -- nvmf/common.sh@124 -- # set -e 00:16:51.595 21:17:45 -- nvmf/common.sh@125 -- # return 0 00:16:51.595 21:17:45 -- nvmf/common.sh@478 -- # '[' -n 1388127 ']' 00:16:51.595 21:17:45 -- nvmf/common.sh@479 -- # killprocess 1388127 00:16:51.595 21:17:45 -- common/autotest_common.sh@936 -- # '[' -z 1388127 ']' 00:16:51.595 21:17:45 -- common/autotest_common.sh@940 -- # kill -0 1388127 00:16:51.595 21:17:45 -- common/autotest_common.sh@941 -- # uname 00:16:51.595 21:17:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:51.595 21:17:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1388127 00:16:51.595 21:17:45 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:51.595 21:17:45 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:51.595 21:17:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1388127' 00:16:51.595 killing process with pid 1388127 00:16:51.595 21:17:45 -- common/autotest_common.sh@955 -- 
# kill 1388127 00:16:51.595 21:17:45 -- common/autotest_common.sh@960 -- # wait 1388127 00:16:52.165 21:17:46 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:52.165 21:17:46 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:52.165 21:17:46 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:52.165 21:17:46 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:52.165 21:17:46 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:52.165 21:17:46 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:52.165 21:17:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:52.165 21:17:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:54.071 21:17:48 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:54.071 00:16:54.071 real 0m40.632s 00:16:54.071 user 2m27.297s 00:16:54.071 sys 0m9.405s 00:16:54.071 21:17:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:54.071 21:17:48 -- common/autotest_common.sh@10 -- # set +x 00:16:54.071 ************************************ 00:16:54.071 END TEST nvmf_ns_hotplug_stress 00:16:54.071 ************************************ 00:16:54.331 21:17:48 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:16:54.331 21:17:48 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:54.331 21:17:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:54.331 21:17:48 -- common/autotest_common.sh@10 -- # set +x 00:16:54.331 ************************************ 00:16:54.331 START TEST nvmf_connect_stress 00:16:54.331 ************************************ 00:16:54.331 21:17:48 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:16:54.331 * Looking for test storage... 
00:16:54.331 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:16:54.331 21:17:48 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:16:54.331 21:17:48 -- nvmf/common.sh@7 -- # uname -s 00:16:54.331 21:17:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:54.331 21:17:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:54.331 21:17:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:54.331 21:17:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:54.331 21:17:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:54.331 21:17:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:54.331 21:17:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:54.331 21:17:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:54.331 21:17:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:54.331 21:17:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:54.331 21:17:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:16:54.331 21:17:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:16:54.331 21:17:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:54.331 21:17:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:54.331 21:17:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:16:54.331 21:17:48 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:54.331 21:17:48 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:16:54.331 21:17:48 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:54.331 21:17:48 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:54.331 21:17:48 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:54.331 21:17:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.331 21:17:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.331 21:17:48 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.331 21:17:48 -- paths/export.sh@5 -- # export PATH 00:16:54.331 21:17:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.331 21:17:48 -- nvmf/common.sh@47 -- # : 0 00:16:54.331 21:17:48 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:54.331 21:17:48 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:54.331 21:17:48 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:54.331 21:17:48 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:54.331 21:17:48 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:54.331 21:17:48 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:54.331 21:17:48 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:54.331 21:17:48 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:54.331 21:17:48 -- target/connect_stress.sh@12 -- # nvmftestinit 00:16:54.331 21:17:48 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:54.331 21:17:48 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:54.331 21:17:48 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:54.331 21:17:48 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:54.331 21:17:48 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:54.331 21:17:48 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:54.331 21:17:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:54.331 21:17:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:54.331 21:17:48 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:16:54.331 21:17:48 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:54.331 21:17:48 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:54.331 21:17:48 -- common/autotest_common.sh@10 -- # set +x 00:16:59.611 21:17:53 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:59.611 21:17:53 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:59.611 21:17:53 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:59.611 21:17:53 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:59.611 21:17:53 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:59.611 21:17:53 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:59.611 21:17:53 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:59.611 21:17:53 -- nvmf/common.sh@295 -- # net_devs=() 00:16:59.611 21:17:53 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:59.611 21:17:53 -- nvmf/common.sh@296 -- # e810=() 00:16:59.611 21:17:53 -- nvmf/common.sh@296 -- # local -ga e810 00:16:59.611 21:17:53 -- nvmf/common.sh@297 -- # 
x722=() 00:16:59.611 21:17:53 -- nvmf/common.sh@297 -- # local -ga x722 00:16:59.611 21:17:53 -- nvmf/common.sh@298 -- # mlx=() 00:16:59.612 21:17:53 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:59.612 21:17:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:59.612 21:17:53 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:59.612 21:17:53 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:59.612 21:17:53 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:59.612 21:17:53 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:59.612 21:17:53 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:59.612 21:17:53 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:59.612 21:17:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:59.612 21:17:53 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:59.612 21:17:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:59.612 21:17:53 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:59.612 21:17:53 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:59.612 21:17:53 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:59.612 21:17:53 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:16:59.612 21:17:53 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:16:59.612 21:17:53 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:16:59.612 21:17:53 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:59.612 21:17:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:59.612 21:17:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:16:59.612 Found 0000:27:00.0 (0x8086 - 0x159b) 00:16:59.612 21:17:53 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:59.612 21:17:53 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:59.612 21:17:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:59.612 21:17:53 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:59.612 21:17:53 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:59.612 21:17:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:59.612 21:17:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:16:59.612 Found 0000:27:00.1 (0x8086 - 0x159b) 00:16:59.612 21:17:53 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:59.612 21:17:53 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:59.612 21:17:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:59.612 21:17:53 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:59.612 21:17:53 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:59.612 21:17:53 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:59.612 21:17:53 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:16:59.612 21:17:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:59.612 21:17:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:59.612 21:17:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:59.612 21:17:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:59.612 21:17:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:16:59.612 Found net devices under 0000:27:00.0: cvl_0_0 00:16:59.612 21:17:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:59.612 21:17:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
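The nvmf/common.sh trace above shows how the harness decides which NICs to test: it buckets known Intel/Mellanox device IDs into the e810/x722/mlx arrays, then walks the matching PCI functions and collects their kernel net devices from sysfs. A condensed sketch of that pattern (pci_bus_cache is assumed to be pre-populated elsewhere in the sourced common.sh; this is an illustration of the traced logic, not SPDK's verbatim code):

    intel=0x8086
    declare -a pci_devs e810 net_devs
    e810+=(${pci_bus_cache["$intel:0x159b"]})              # the 0x159b devices found above
    pci_devs+=("${e810[@]}")
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # netdevs bound to this function
        pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path, keep names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done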
00:16:59.612 21:17:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:59.612 21:17:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:59.612 21:17:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:59.612 21:17:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:16:59.612 Found net devices under 0000:27:00.1: cvl_0_1 00:16:59.612 21:17:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:59.612 21:17:53 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:59.612 21:17:53 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:59.612 21:17:53 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:59.612 21:17:53 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:59.612 21:17:53 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:59.612 21:17:53 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:59.612 21:17:53 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:59.612 21:17:53 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:59.612 21:17:53 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:59.612 21:17:53 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:59.612 21:17:53 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:59.612 21:17:53 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:59.612 21:17:53 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:59.612 21:17:53 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:59.612 21:17:53 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:59.612 21:17:53 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:59.612 21:17:53 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:59.612 21:17:53 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:59.612 21:17:53 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:59.612 21:17:53 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:59.612 21:17:53 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:59.612 21:17:53 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:59.612 21:17:53 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:59.612 21:17:53 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:59.612 21:17:53 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:59.612 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:59.612 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.287 ms 00:16:59.612 00:16:59.612 --- 10.0.0.2 ping statistics --- 00:16:59.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:59.612 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:16:59.612 21:17:53 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:59.612 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:59.612 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:16:59.612 00:16:59.612 --- 10.0.0.1 ping statistics --- 00:16:59.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:59.612 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:16:59.612 21:17:53 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:59.612 21:17:53 -- nvmf/common.sh@411 -- # return 0 00:16:59.612 21:17:53 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:59.612 21:17:53 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:59.612 21:17:53 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:59.612 21:17:53 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:59.612 21:17:53 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:59.612 21:17:53 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:59.612 21:17:53 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:59.612 21:17:53 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:16:59.612 21:17:53 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:59.612 21:17:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:59.612 21:17:53 -- common/autotest_common.sh@10 -- # set +x 00:16:59.612 21:17:53 -- nvmf/common.sh@470 -- # nvmfpid=1398362 00:16:59.612 21:17:53 -- nvmf/common.sh@471 -- # waitforlisten 1398362 00:16:59.612 21:17:53 -- common/autotest_common.sh@817 -- # '[' -z 1398362 ']' 00:16:59.612 21:17:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:59.612 21:17:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:59.612 21:17:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:59.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:59.612 21:17:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:59.612 21:17:53 -- common/autotest_common.sh@10 -- # set +x 00:16:59.612 21:17:53 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:59.612 [2024-04-23 21:17:53.515534] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 00:16:59.612 [2024-04-23 21:17:53.515650] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:59.612 EAL: No free 2048 kB hugepages reported on node 1 00:16:59.612 [2024-04-23 21:17:53.634091] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:59.612 [2024-04-23 21:17:53.730456] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:59.612 [2024-04-23 21:17:53.730490] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:59.612 [2024-04-23 21:17:53.730499] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:59.612 [2024-04-23 21:17:53.730508] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:59.612 [2024-04-23 21:17:53.730515] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
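The nvmf_tcp_init sequence traced above splits the dual-port NIC between initiator and target by moving the target port (cvl_0_0) into its own network namespace; the back-to-back pings confirm the 10.0.0.0/24 link before nvmf_tgt starts. The same plumbing as a standalone sketch, with interface and namespace names taken directly from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                            # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # target -> initiator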
00:16:59.612 [2024-04-23 21:17:53.730665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:59.612 [2024-04-23 21:17:53.730777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:59.612 [2024-04-23 21:17:53.730788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:00.185 21:17:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:00.185 21:17:54 -- common/autotest_common.sh@850 -- # return 0 00:17:00.185 21:17:54 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:00.185 21:17:54 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:00.185 21:17:54 -- common/autotest_common.sh@10 -- # set +x 00:17:00.185 21:17:54 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:00.185 21:17:54 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:00.185 21:17:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:00.185 21:17:54 -- common/autotest_common.sh@10 -- # set +x 00:17:00.185 [2024-04-23 21:17:54.265004] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:00.185 21:17:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:00.185 21:17:54 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:00.185 21:17:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:00.185 21:17:54 -- common/autotest_common.sh@10 -- # set +x 00:17:00.185 21:17:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:00.185 21:17:54 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:00.185 21:17:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:00.185 21:17:54 -- common/autotest_common.sh@10 -- # set +x 00:17:00.185 [2024-04-23 21:17:54.307445] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:00.185 21:17:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:00.185 21:17:54 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:00.185 21:17:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:00.185 21:17:54 -- common/autotest_common.sh@10 -- # set +x 00:17:00.185 NULL1 00:17:00.185 21:17:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:00.185 21:17:54 -- target/connect_stress.sh@21 -- # PERF_PID=1398526 00:17:00.185 21:17:54 -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:00.185 21:17:54 -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:00.185 21:17:54 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:17:00.186 21:17:54 -- target/connect_stress.sh@27 -- # seq 1 20 00:17:00.186 21:17:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:00.186 21:17:54 -- target/connect_stress.sh@28 -- # cat 00:17:00.186 21:17:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:00.186 21:17:54 -- target/connect_stress.sh@28 -- # cat 00:17:00.186 21:17:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:00.186 21:17:54 -- target/connect_stress.sh@28 -- # cat 00:17:00.186 21:17:54 -- target/connect_stress.sh@27 
-- # for i in $(seq 1 20) 00:17:00.186 21:17:54 -- target/connect_stress.sh@28 -- # cat 00:17:00.186 21:17:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:00.186 21:17:54 -- target/connect_stress.sh@28 -- # cat 00:17:00.186 21:17:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:00.186 21:17:54 -- target/connect_stress.sh@28 -- # cat 00:17:00.186 21:17:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:00.186 21:17:54 -- target/connect_stress.sh@28 -- # cat 00:17:00.186 21:17:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:00.186 21:17:54 -- target/connect_stress.sh@28 -- # cat 00:17:00.186 21:17:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:00.186 21:17:54 -- target/connect_stress.sh@28 -- # cat 00:17:00.186 21:17:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:00.186 21:17:54 -- target/connect_stress.sh@28 -- # cat 00:17:00.186 21:17:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:00.186 21:17:54 -- target/connect_stress.sh@28 -- # cat 00:17:00.186 21:17:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:00.186 21:17:54 -- target/connect_stress.sh@28 -- # cat 00:17:00.186 21:17:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:00.186 21:17:54 -- target/connect_stress.sh@28 -- # cat 00:17:00.186 21:17:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:00.186 21:17:54 -- target/connect_stress.sh@28 -- # cat 00:17:00.186 21:17:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:00.186 21:17:54 -- target/connect_stress.sh@28 -- # cat 00:17:00.186 21:17:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:00.186 21:17:54 -- target/connect_stress.sh@28 -- # cat 00:17:00.186 21:17:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:00.186 21:17:54 -- target/connect_stress.sh@28 -- # cat 00:17:00.186 21:17:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:00.186 21:17:54 -- target/connect_stress.sh@28 -- # cat 00:17:00.186 21:17:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:00.186 21:17:54 -- target/connect_stress.sh@28 -- # cat 00:17:00.186 21:17:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:00.186 21:17:54 -- target/connect_stress.sh@28 -- # cat 00:17:00.186 21:17:54 -- target/connect_stress.sh@34 -- # kill -0 1398526 00:17:00.186 21:17:54 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:00.186 21:17:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:00.186 21:17:54 -- common/autotest_common.sh@10 -- # set +x 00:17:00.186 EAL: No free 2048 kB hugepages reported on node 1 00:17:00.448 21:17:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:00.448 21:17:54 -- target/connect_stress.sh@34 -- # kill -0 1398526 00:17:00.448 21:17:54 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:00.448 21:17:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:00.448 21:17:54 -- common/autotest_common.sh@10 -- # set +x 00:17:01.016 21:17:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:01.016 21:17:55 -- target/connect_stress.sh@34 -- # kill -0 1398526 00:17:01.016 21:17:55 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:01.016 21:17:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:01.016 21:17:55 -- common/autotest_common.sh@10 -- # set +x 00:17:01.275 21:17:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:01.275 21:17:55 -- target/connect_stress.sh@34 -- # kill -0 1398526 
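From here until the "No such process" line the log is the connect_stress monitor loop. The target side was stood up via rpc_cmd (TCP transport, subsystem cnode1, listener on 10.0.0.2:4420, a 1000 MB NULL1 bdev with 512-byte blocks), the connect_stress binary was launched against it for 10 seconds, and the script replays a batch of RPCs while the binary stays alive; the twenty seq-1-20/cat entries at @27/@28 above build that rpc.txt batch. A condensed sketch of the harness; the exact contents written into rpc.txt are not visible in the trace, so the loop body is an assumption:

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_null_create NULL1 1000 512
    connect_stress -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -t 10 &
    PERF_PID=$!
    while kill -0 "$PERF_PID" 2>/dev/null; do    # @34: stress tool still running?
        rpc_cmd < "$rpcs"                        # @35: replay the batched RPC file
    done
    wait "$PERF_PID"                             # @38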
00:17:01.275 21:17:55 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:01.275 21:17:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:01.275 21:17:55 -- common/autotest_common.sh@10 -- # set +x 00:17:01.534 21:17:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:01.534 21:17:55 -- target/connect_stress.sh@34 -- # kill -0 1398526 00:17:01.534 21:17:55 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:01.534 21:17:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:01.534 21:17:55 -- common/autotest_common.sh@10 -- # set +x 00:17:01.794 21:17:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:01.794 21:17:55 -- target/connect_stress.sh@34 -- # kill -0 1398526 00:17:01.794 21:17:55 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:01.794 21:17:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:01.794 21:17:55 -- common/autotest_common.sh@10 -- # set +x 00:17:02.055 21:17:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:02.055 21:17:56 -- target/connect_stress.sh@34 -- # kill -0 1398526 00:17:02.055 21:17:56 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:02.055 21:17:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:02.055 21:17:56 -- common/autotest_common.sh@10 -- # set +x 00:17:02.625 21:17:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:02.625 21:17:56 -- target/connect_stress.sh@34 -- # kill -0 1398526 00:17:02.625 21:17:56 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:02.625 21:17:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:02.625 21:17:56 -- common/autotest_common.sh@10 -- # set +x 00:17:02.883 21:17:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:02.883 21:17:56 -- target/connect_stress.sh@34 -- # kill -0 1398526 00:17:02.883 21:17:56 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:02.883 21:17:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:02.883 21:17:56 -- common/autotest_common.sh@10 -- # set +x 00:17:03.142 21:17:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:03.142 21:17:57 -- target/connect_stress.sh@34 -- # kill -0 1398526 00:17:03.142 21:17:57 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:03.142 21:17:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:03.142 21:17:57 -- common/autotest_common.sh@10 -- # set +x 00:17:03.418 21:17:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:03.418 21:17:57 -- target/connect_stress.sh@34 -- # kill -0 1398526 00:17:03.418 21:17:57 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:03.418 21:17:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:03.418 21:17:57 -- common/autotest_common.sh@10 -- # set +x 00:17:03.679 21:17:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:03.679 21:17:57 -- target/connect_stress.sh@34 -- # kill -0 1398526 00:17:03.679 21:17:57 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:03.679 21:17:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:03.679 21:17:57 -- common/autotest_common.sh@10 -- # set +x 00:17:04.249 21:17:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:04.249 21:17:58 -- target/connect_stress.sh@34 -- # kill -0 1398526 00:17:04.249 21:17:58 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:04.249 21:17:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:04.249 21:17:58 -- common/autotest_common.sh@10 -- # set +x 00:17:04.506 21:17:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:04.506 21:17:58 -- target/connect_stress.sh@34 -- # kill -0 1398526 00:17:04.506 
21:17:58 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:04.506 21:17:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:04.506 21:17:58 -- common/autotest_common.sh@10 -- # set +x 00:17:04.764 21:17:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:04.764 21:17:58 -- target/connect_stress.sh@34 -- # kill -0 1398526 00:17:04.764 21:17:58 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:04.764 21:17:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:04.764 21:17:58 -- common/autotest_common.sh@10 -- # set +x 00:17:05.024 21:17:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:05.025 21:17:59 -- target/connect_stress.sh@34 -- # kill -0 1398526 00:17:05.025 21:17:59 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:05.025 21:17:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:05.025 21:17:59 -- common/autotest_common.sh@10 -- # set +x 00:17:05.285 21:17:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:05.285 21:17:59 -- target/connect_stress.sh@34 -- # kill -0 1398526 00:17:05.285 21:17:59 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:05.285 21:17:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:05.285 21:17:59 -- common/autotest_common.sh@10 -- # set +x 00:17:05.857 21:17:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:05.857 21:17:59 -- target/connect_stress.sh@34 -- # kill -0 1398526 00:17:05.857 21:17:59 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:05.857 21:17:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:05.857 21:17:59 -- common/autotest_common.sh@10 -- # set +x 00:17:06.115 21:18:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:06.115 21:18:00 -- target/connect_stress.sh@34 -- # kill -0 1398526 00:17:06.115 21:18:00 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:06.115 21:18:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:06.115 21:18:00 -- common/autotest_common.sh@10 -- # set +x 00:17:06.373 21:18:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:06.373 21:18:00 -- target/connect_stress.sh@34 -- # kill -0 1398526 00:17:06.373 21:18:00 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:06.373 21:18:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:06.373 21:18:00 -- common/autotest_common.sh@10 -- # set +x 00:17:06.633 21:18:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:06.633 21:18:00 -- target/connect_stress.sh@34 -- # kill -0 1398526 00:17:06.633 21:18:00 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:06.633 21:18:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:06.633 21:18:00 -- common/autotest_common.sh@10 -- # set +x 00:17:06.894 21:18:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:06.894 21:18:01 -- target/connect_stress.sh@34 -- # kill -0 1398526 00:17:06.894 21:18:01 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:06.894 21:18:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:06.894 21:18:01 -- common/autotest_common.sh@10 -- # set +x 00:17:07.466 21:18:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:07.466 21:18:01 -- target/connect_stress.sh@34 -- # kill -0 1398526 00:17:07.466 21:18:01 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:07.466 21:18:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:07.466 21:18:01 -- common/autotest_common.sh@10 -- # set +x 00:17:07.725 21:18:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:07.725 21:18:01 -- target/connect_stress.sh@34 -- # kill -0 1398526 00:17:07.725 21:18:01 -- 
target/connect_stress.sh@35 -- # rpc_cmd 00:17:07.725 21:18:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:07.725 21:18:01 -- common/autotest_common.sh@10 -- # set +x 00:17:07.984 21:18:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:07.984 21:18:02 -- target/connect_stress.sh@34 -- # kill -0 1398526 00:17:07.984 21:18:02 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:07.984 21:18:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:07.984 21:18:02 -- common/autotest_common.sh@10 -- # set +x 00:17:08.243 21:18:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:08.243 21:18:02 -- target/connect_stress.sh@34 -- # kill -0 1398526 00:17:08.243 21:18:02 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:08.243 21:18:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:08.243 21:18:02 -- common/autotest_common.sh@10 -- # set +x 00:17:08.503 21:18:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:08.503 21:18:02 -- target/connect_stress.sh@34 -- # kill -0 1398526 00:17:08.503 21:18:02 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:08.503 21:18:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:08.503 21:18:02 -- common/autotest_common.sh@10 -- # set +x 00:17:09.073 21:18:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:09.073 21:18:03 -- target/connect_stress.sh@34 -- # kill -0 1398526 00:17:09.073 21:18:03 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:09.073 21:18:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:09.073 21:18:03 -- common/autotest_common.sh@10 -- # set +x 00:17:09.332 21:18:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:09.332 21:18:03 -- target/connect_stress.sh@34 -- # kill -0 1398526 00:17:09.332 21:18:03 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:09.332 21:18:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:09.332 21:18:03 -- common/autotest_common.sh@10 -- # set +x 00:17:09.590 21:18:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:09.590 21:18:03 -- target/connect_stress.sh@34 -- # kill -0 1398526 00:17:09.590 21:18:03 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:09.590 21:18:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:09.590 21:18:03 -- common/autotest_common.sh@10 -- # set +x 00:17:09.848 21:18:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:09.849 21:18:04 -- target/connect_stress.sh@34 -- # kill -0 1398526 00:17:09.849 21:18:04 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:09.849 21:18:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:09.849 21:18:04 -- common/autotest_common.sh@10 -- # set +x 00:17:10.108 21:18:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:10.108 21:18:04 -- target/connect_stress.sh@34 -- # kill -0 1398526 00:17:10.108 21:18:04 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:10.108 21:18:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:10.108 21:18:04 -- common/autotest_common.sh@10 -- # set +x 00:17:10.370 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:10.629 21:18:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:10.629 21:18:04 -- target/connect_stress.sh@34 -- # kill -0 1398526 00:17:10.629 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1398526) - No such process 00:17:10.629 21:18:04 -- target/connect_stress.sh@38 -- # wait 1398526 00:17:10.629 21:18:04 -- target/connect_stress.sh@39 -- # rm -f 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:10.629 21:18:04 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:10.629 21:18:04 -- target/connect_stress.sh@43 -- # nvmftestfini 00:17:10.629 21:18:04 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:10.629 21:18:04 -- nvmf/common.sh@117 -- # sync 00:17:10.629 21:18:04 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:10.629 21:18:04 -- nvmf/common.sh@120 -- # set +e 00:17:10.629 21:18:04 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:10.629 21:18:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:10.629 rmmod nvme_tcp 00:17:10.629 rmmod nvme_fabrics 00:17:10.629 rmmod nvme_keyring 00:17:10.629 21:18:04 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:10.629 21:18:04 -- nvmf/common.sh@124 -- # set -e 00:17:10.629 21:18:04 -- nvmf/common.sh@125 -- # return 0 00:17:10.629 21:18:04 -- nvmf/common.sh@478 -- # '[' -n 1398362 ']' 00:17:10.629 21:18:04 -- nvmf/common.sh@479 -- # killprocess 1398362 00:17:10.629 21:18:04 -- common/autotest_common.sh@936 -- # '[' -z 1398362 ']' 00:17:10.629 21:18:04 -- common/autotest_common.sh@940 -- # kill -0 1398362 00:17:10.629 21:18:04 -- common/autotest_common.sh@941 -- # uname 00:17:10.629 21:18:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:10.629 21:18:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1398362 00:17:10.629 21:18:04 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:10.629 21:18:04 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:10.629 21:18:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1398362' 00:17:10.629 killing process with pid 1398362 00:17:10.629 21:18:04 -- common/autotest_common.sh@955 -- # kill 1398362 00:17:10.629 21:18:04 -- common/autotest_common.sh@960 -- # wait 1398362 00:17:11.199 21:18:05 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:11.199 21:18:05 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:11.199 21:18:05 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:11.200 21:18:05 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:11.200 21:18:05 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:11.200 21:18:05 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:11.200 21:18:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:11.200 21:18:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:13.116 21:18:07 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:13.116 00:17:13.116 real 0m18.871s 00:17:13.116 user 0m41.930s 00:17:13.116 sys 0m7.112s 00:17:13.116 21:18:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:13.116 21:18:07 -- common/autotest_common.sh@10 -- # set +x 00:17:13.116 ************************************ 00:17:13.116 END TEST nvmf_connect_stress 00:17:13.116 ************************************ 00:17:13.116 21:18:07 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:13.116 21:18:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:13.116 21:18:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:13.116 21:18:07 -- common/autotest_common.sh@10 -- # set +x 00:17:13.375 ************************************ 00:17:13.375 START TEST nvmf_fused_ordering 00:17:13.375 ************************************ 00:17:13.375 21:18:07 -- common/autotest_common.sh@1111 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:13.375 * Looking for test storage... 00:17:13.375 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:17:13.375 21:18:07 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:17:13.375 21:18:07 -- nvmf/common.sh@7 -- # uname -s 00:17:13.375 21:18:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:13.375 21:18:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:13.375 21:18:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:13.375 21:18:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:13.375 21:18:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:13.375 21:18:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:13.375 21:18:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:13.375 21:18:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:13.375 21:18:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:13.375 21:18:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:13.375 21:18:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:17:13.375 21:18:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:17:13.375 21:18:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:13.375 21:18:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:13.375 21:18:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:17:13.375 21:18:07 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:13.375 21:18:07 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:17:13.375 21:18:07 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:13.375 21:18:07 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:13.375 21:18:07 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:13.375 21:18:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.375 21:18:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.376 21:18:07 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.376 21:18:07 -- paths/export.sh@5 -- # export PATH 00:17:13.376 21:18:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.376 21:18:07 -- nvmf/common.sh@47 -- # : 0 00:17:13.376 21:18:07 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:13.376 21:18:07 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:13.376 21:18:07 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:13.376 21:18:07 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:13.376 21:18:07 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:13.376 21:18:07 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:13.376 21:18:07 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:13.376 21:18:07 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:13.376 21:18:07 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:17:13.376 21:18:07 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:13.376 21:18:07 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:13.376 21:18:07 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:13.376 21:18:07 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:13.376 21:18:07 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:13.376 21:18:07 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:13.376 21:18:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:13.376 21:18:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:13.376 21:18:07 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:17:13.376 21:18:07 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:13.376 21:18:07 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:13.376 21:18:07 -- common/autotest_common.sh@10 -- # set +x 00:17:18.793 21:18:12 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:18.793 21:18:12 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:18.793 21:18:12 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:18.793 21:18:12 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:18.793 21:18:12 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:18.793 21:18:12 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:18.793 21:18:12 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:18.793 21:18:12 -- nvmf/common.sh@295 -- # net_devs=() 00:17:18.793 21:18:12 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:18.793 21:18:12 -- nvmf/common.sh@296 -- # e810=() 00:17:18.793 21:18:12 -- nvmf/common.sh@296 -- # local -ga e810 00:17:18.793 21:18:12 -- nvmf/common.sh@297 -- # 
x722=() 00:17:18.793 21:18:12 -- nvmf/common.sh@297 -- # local -ga x722 00:17:18.793 21:18:12 -- nvmf/common.sh@298 -- # mlx=() 00:17:18.793 21:18:12 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:18.793 21:18:12 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:18.793 21:18:12 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:18.793 21:18:12 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:18.793 21:18:12 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:18.793 21:18:12 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:18.793 21:18:12 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:18.793 21:18:12 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:18.793 21:18:12 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:18.793 21:18:12 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:18.793 21:18:12 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:18.793 21:18:12 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:18.793 21:18:12 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:18.793 21:18:12 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:18.793 21:18:12 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:17:18.793 21:18:12 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:17:18.793 21:18:12 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:17:18.793 21:18:12 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:18.793 21:18:12 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:18.793 21:18:12 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:17:18.793 Found 0000:27:00.0 (0x8086 - 0x159b) 00:17:18.793 21:18:12 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:18.793 21:18:12 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:18.793 21:18:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:18.793 21:18:12 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:18.793 21:18:12 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:18.793 21:18:12 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:18.793 21:18:12 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:17:18.793 Found 0000:27:00.1 (0x8086 - 0x159b) 00:17:18.793 21:18:12 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:18.793 21:18:12 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:18.793 21:18:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:18.793 21:18:12 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:18.793 21:18:12 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:18.793 21:18:12 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:18.793 21:18:12 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:17:18.793 21:18:12 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:18.793 21:18:12 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:18.793 21:18:12 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:18.793 21:18:12 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:18.793 21:18:12 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:17:18.793 Found net devices under 0000:27:00.0: cvl_0_0 00:17:18.793 21:18:12 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:18.793 21:18:12 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:17:18.793 21:18:12 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:18.793 21:18:12 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:18.793 21:18:12 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:18.793 21:18:12 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:17:18.793 Found net devices under 0000:27:00.1: cvl_0_1 00:17:18.793 21:18:12 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:18.793 21:18:12 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:18.793 21:18:12 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:18.793 21:18:12 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:18.793 21:18:12 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:18.793 21:18:12 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:18.793 21:18:12 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:18.793 21:18:12 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:18.793 21:18:12 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:18.793 21:18:12 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:18.793 21:18:12 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:18.793 21:18:12 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:18.793 21:18:12 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:18.793 21:18:12 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:18.793 21:18:12 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:18.793 21:18:12 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:18.793 21:18:12 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:18.793 21:18:12 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:18.793 21:18:12 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:18.793 21:18:12 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:18.793 21:18:12 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:18.793 21:18:12 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:18.793 21:18:12 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:18.793 21:18:12 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:18.793 21:18:12 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:18.793 21:18:12 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:18.793 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:18.793 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:17:18.793 00:17:18.793 --- 10.0.0.2 ping statistics --- 00:17:18.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:18.793 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:17:18.793 21:18:12 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:18.793 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:18.793 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.229 ms 00:17:18.793 00:17:18.793 --- 10.0.0.1 ping statistics --- 00:17:18.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:18.793 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:17:18.793 21:18:12 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:18.793 21:18:12 -- nvmf/common.sh@411 -- # return 0 00:17:18.793 21:18:12 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:18.793 21:18:12 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:18.793 21:18:12 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:18.793 21:18:12 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:18.793 21:18:12 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:18.793 21:18:12 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:18.793 21:18:12 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:18.793 21:18:12 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:17:18.793 21:18:12 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:18.793 21:18:12 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:18.793 21:18:12 -- common/autotest_common.sh@10 -- # set +x 00:17:18.793 21:18:12 -- nvmf/common.sh@470 -- # nvmfpid=1405191 00:17:18.793 21:18:12 -- nvmf/common.sh@471 -- # waitforlisten 1405191 00:17:18.794 21:18:12 -- common/autotest_common.sh@817 -- # '[' -z 1405191 ']' 00:17:18.794 21:18:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:18.794 21:18:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:18.794 21:18:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:18.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:18.794 21:18:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:18.794 21:18:12 -- common/autotest_common.sh@10 -- # set +x 00:17:18.794 21:18:12 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:18.794 [2024-04-23 21:18:12.844638] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 00:17:18.794 [2024-04-23 21:18:12.844744] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:18.794 EAL: No free 2048 kB hugepages reported on node 1 00:17:18.794 [2024-04-23 21:18:12.966188] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.794 [2024-04-23 21:18:13.061485] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:18.794 [2024-04-23 21:18:13.061524] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:18.794 [2024-04-23 21:18:13.061534] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:18.794 [2024-04-23 21:18:13.061543] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:18.794 [2024-04-23 21:18:13.061550] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
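[annotation] The namespace plumbing traced above (nvmf_tcp_init, nvmf/common.sh@229-268) reduces to the sketch below. It is a condensed restatement of the commands shown in the xtrace, not the verbatim helper; the interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are the values printed in this log.

#!/usr/bin/env bash
# Sketch of the target/initiator split traced above: one physical port is
# moved into a network namespace to act as the NVMe-oF target, the other
# stays in the root namespace as the initiator.
set -e

NS=cvl_0_0_ns_spdk        # namespace hosting the target side
TGT_IF=cvl_0_0            # port that serves NVMe/TCP (10.0.0.2)
INI_IF=cvl_0_1            # port used by the initiator (10.0.0.1)

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"

ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Allow NVMe/TCP traffic in on the initiator-facing port, then verify
# reachability in both directions before any NVMe-oF traffic is attempted.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                      # root ns -> target ns
ip netns exec "$NS" ping -c 1 10.0.0.1  # target ns -> root ns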
00:17:18.794 [2024-04-23 21:18:13.061575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:19.362 21:18:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:19.362 21:18:13 -- common/autotest_common.sh@850 -- # return 0 00:17:19.362 21:18:13 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:19.362 21:18:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:19.362 21:18:13 -- common/autotest_common.sh@10 -- # set +x 00:17:19.362 21:18:13 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:19.362 21:18:13 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:19.362 21:18:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:19.362 21:18:13 -- common/autotest_common.sh@10 -- # set +x 00:17:19.362 [2024-04-23 21:18:13.583159] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:19.362 21:18:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:19.362 21:18:13 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:19.362 21:18:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:19.362 21:18:13 -- common/autotest_common.sh@10 -- # set +x 00:17:19.362 21:18:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:19.362 21:18:13 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:19.362 21:18:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:19.362 21:18:13 -- common/autotest_common.sh@10 -- # set +x 00:17:19.362 [2024-04-23 21:18:13.603374] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:19.362 21:18:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:19.362 21:18:13 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:19.362 21:18:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:19.362 21:18:13 -- common/autotest_common.sh@10 -- # set +x 00:17:19.362 NULL1 00:17:19.362 21:18:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:19.362 21:18:13 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:17:19.362 21:18:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:19.362 21:18:13 -- common/autotest_common.sh@10 -- # set +x 00:17:19.362 21:18:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:19.362 21:18:13 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:17:19.362 21:18:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:19.362 21:18:13 -- common/autotest_common.sh@10 -- # set +x 00:17:19.362 21:18:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:19.362 21:18:13 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:17:19.622 [2024-04-23 21:18:13.670814] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 
00:17:19.622 [2024-04-23 21:18:13.670892] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1405231 ] 00:17:19.622 EAL: No free 2048 kB hugepages reported on node 1 00:17:20.191 Attached to nqn.2016-06.io.spdk:cnode1 00:17:20.191 Namespace ID: 1 size: 1GB 00:17:20.191 fused_ordering(0) 00:17:20.191 fused_ordering(1) 00:17:20.191 fused_ordering(2) 00:17:20.192 fused_ordering(3) 00:17:20.192 fused_ordering(4) 00:17:20.192 fused_ordering(5) 00:17:20.192 fused_ordering(6) 00:17:20.192 fused_ordering(7) 00:17:20.192 fused_ordering(8) 00:17:20.192 fused_ordering(9) 00:17:20.192 fused_ordering(10) 00:17:20.192 fused_ordering(11) 00:17:20.192 fused_ordering(12) 00:17:20.192 fused_ordering(13) 00:17:20.192 fused_ordering(14) 00:17:20.192 fused_ordering(15) 00:17:20.192 fused_ordering(16) 00:17:20.192 fused_ordering(17) 00:17:20.192 fused_ordering(18) 00:17:20.192 fused_ordering(19) 00:17:20.192 fused_ordering(20) 00:17:20.192 fused_ordering(21) 00:17:20.192 fused_ordering(22) 00:17:20.192 fused_ordering(23) 00:17:20.192 fused_ordering(24) 00:17:20.192 fused_ordering(25) 00:17:20.192 fused_ordering(26) 00:17:20.192 fused_ordering(27) 00:17:20.192 fused_ordering(28) 00:17:20.192 fused_ordering(29) 00:17:20.192 fused_ordering(30) 00:17:20.192 fused_ordering(31) 00:17:20.192 fused_ordering(32) 00:17:20.192 fused_ordering(33) 00:17:20.192 fused_ordering(34) 00:17:20.192 fused_ordering(35) 00:17:20.192 fused_ordering(36) 00:17:20.192 fused_ordering(37) 00:17:20.192 fused_ordering(38) 00:17:20.192 fused_ordering(39) 00:17:20.192 fused_ordering(40) 00:17:20.192 fused_ordering(41) 00:17:20.192 fused_ordering(42) 00:17:20.192 fused_ordering(43) 00:17:20.192 fused_ordering(44) 00:17:20.192 fused_ordering(45) 00:17:20.192 fused_ordering(46) 00:17:20.192 fused_ordering(47) 00:17:20.192 fused_ordering(48) 00:17:20.192 fused_ordering(49) 00:17:20.192 fused_ordering(50) 00:17:20.192 fused_ordering(51) 00:17:20.192 fused_ordering(52) 00:17:20.192 fused_ordering(53) 00:17:20.192 fused_ordering(54) 00:17:20.192 fused_ordering(55) 00:17:20.192 fused_ordering(56) 00:17:20.192 fused_ordering(57) 00:17:20.192 fused_ordering(58) 00:17:20.192 fused_ordering(59) 00:17:20.192 fused_ordering(60) 00:17:20.192 fused_ordering(61) 00:17:20.192 fused_ordering(62) 00:17:20.192 fused_ordering(63) 00:17:20.192 fused_ordering(64) 00:17:20.192 fused_ordering(65) 00:17:20.192 fused_ordering(66) 00:17:20.192 fused_ordering(67) 00:17:20.192 fused_ordering(68) 00:17:20.192 fused_ordering(69) 00:17:20.192 fused_ordering(70) 00:17:20.192 fused_ordering(71) 00:17:20.192 fused_ordering(72) 00:17:20.192 fused_ordering(73) 00:17:20.192 fused_ordering(74) 00:17:20.192 fused_ordering(75) 00:17:20.192 fused_ordering(76) 00:17:20.192 fused_ordering(77) 00:17:20.192 fused_ordering(78) 00:17:20.192 fused_ordering(79) 00:17:20.192 fused_ordering(80) 00:17:20.192 fused_ordering(81) 00:17:20.192 fused_ordering(82) 00:17:20.192 fused_ordering(83) 00:17:20.192 fused_ordering(84) 00:17:20.192 fused_ordering(85) 00:17:20.192 fused_ordering(86) 00:17:20.192 fused_ordering(87) 00:17:20.192 fused_ordering(88) 00:17:20.192 fused_ordering(89) 00:17:20.192 fused_ordering(90) 00:17:20.192 fused_ordering(91) 00:17:20.192 fused_ordering(92) 00:17:20.192 fused_ordering(93) 00:17:20.192 fused_ordering(94) 00:17:20.192 fused_ordering(95) 00:17:20.192 fused_ordering(96) 00:17:20.192 
fused_ordering(97) ... fused_ordering(956)
[... identical completion records for fused commands 97 through 956 elided: every one of the 1,024 fused_ordering commands reported completion, arriving in bursts stamped 00:17:20.192, 00:17:20.451, 00:17:21.020, 00:17:21.281 and 00:17:22.221-00:17:22.222; the output resumes below ...]
fused_ordering(957) 00:17:22.222 fused_ordering(958) 00:17:22.222 fused_ordering(959) 00:17:22.222 fused_ordering(960) 00:17:22.222 fused_ordering(961) 00:17:22.222 fused_ordering(962) 00:17:22.222 fused_ordering(963) 00:17:22.222 fused_ordering(964) 00:17:22.222 fused_ordering(965) 00:17:22.222 fused_ordering(966) 00:17:22.222 fused_ordering(967) 00:17:22.222 fused_ordering(968) 00:17:22.222 fused_ordering(969) 00:17:22.222 fused_ordering(970) 00:17:22.222 fused_ordering(971) 00:17:22.222 fused_ordering(972) 00:17:22.222 fused_ordering(973) 00:17:22.222 fused_ordering(974) 00:17:22.222 fused_ordering(975) 00:17:22.222 fused_ordering(976) 00:17:22.222 fused_ordering(977) 00:17:22.222 fused_ordering(978) 00:17:22.222 fused_ordering(979) 00:17:22.222 fused_ordering(980) 00:17:22.222 fused_ordering(981) 00:17:22.222 fused_ordering(982) 00:17:22.222 fused_ordering(983) 00:17:22.222 fused_ordering(984) 00:17:22.222 fused_ordering(985) 00:17:22.222 fused_ordering(986) 00:17:22.222 fused_ordering(987) 00:17:22.222 fused_ordering(988) 00:17:22.222 fused_ordering(989) 00:17:22.222 fused_ordering(990) 00:17:22.222 fused_ordering(991) 00:17:22.222 fused_ordering(992) 00:17:22.222 fused_ordering(993) 00:17:22.222 fused_ordering(994) 00:17:22.222 fused_ordering(995) 00:17:22.222 fused_ordering(996) 00:17:22.222 fused_ordering(997) 00:17:22.222 fused_ordering(998) 00:17:22.222 fused_ordering(999) 00:17:22.222 fused_ordering(1000) 00:17:22.222 fused_ordering(1001) 00:17:22.222 fused_ordering(1002) 00:17:22.222 fused_ordering(1003) 00:17:22.222 fused_ordering(1004) 00:17:22.222 fused_ordering(1005) 00:17:22.222 fused_ordering(1006) 00:17:22.222 fused_ordering(1007) 00:17:22.222 fused_ordering(1008) 00:17:22.222 fused_ordering(1009) 00:17:22.222 fused_ordering(1010) 00:17:22.222 fused_ordering(1011) 00:17:22.222 fused_ordering(1012) 00:17:22.222 fused_ordering(1013) 00:17:22.222 fused_ordering(1014) 00:17:22.222 fused_ordering(1015) 00:17:22.222 fused_ordering(1016) 00:17:22.222 fused_ordering(1017) 00:17:22.222 fused_ordering(1018) 00:17:22.222 fused_ordering(1019) 00:17:22.222 fused_ordering(1020) 00:17:22.222 fused_ordering(1021) 00:17:22.222 fused_ordering(1022) 00:17:22.222 fused_ordering(1023) 00:17:22.222 21:18:16 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:17:22.222 21:18:16 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:17:22.222 21:18:16 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:22.222 21:18:16 -- nvmf/common.sh@117 -- # sync 00:17:22.222 21:18:16 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:22.222 21:18:16 -- nvmf/common.sh@120 -- # set +e 00:17:22.222 21:18:16 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:22.222 21:18:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:22.222 rmmod nvme_tcp 00:17:22.222 rmmod nvme_fabrics 00:17:22.222 rmmod nvme_keyring 00:17:22.222 21:18:16 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:22.222 21:18:16 -- nvmf/common.sh@124 -- # set -e 00:17:22.222 21:18:16 -- nvmf/common.sh@125 -- # return 0 00:17:22.222 21:18:16 -- nvmf/common.sh@478 -- # '[' -n 1405191 ']' 00:17:22.222 21:18:16 -- nvmf/common.sh@479 -- # killprocess 1405191 00:17:22.222 21:18:16 -- common/autotest_common.sh@936 -- # '[' -z 1405191 ']' 00:17:22.222 21:18:16 -- common/autotest_common.sh@940 -- # kill -0 1405191 00:17:22.222 21:18:16 -- common/autotest_common.sh@941 -- # uname 00:17:22.222 21:18:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:22.222 21:18:16 -- common/autotest_common.sh@942 -- # ps --no-headers 
-o comm= 1405191 00:17:22.222 21:18:16 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:22.222 21:18:16 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:22.222 21:18:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1405191' 00:17:22.222 killing process with pid 1405191 00:17:22.222 21:18:16 -- common/autotest_common.sh@955 -- # kill 1405191 00:17:22.222 21:18:16 -- common/autotest_common.sh@960 -- # wait 1405191 00:17:22.481 21:18:16 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:22.481 21:18:16 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:22.481 21:18:16 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:22.481 21:18:16 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:22.481 21:18:16 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:22.481 21:18:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:22.481 21:18:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:22.481 21:18:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:25.024 21:18:18 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:25.024 00:17:25.024 real 0m11.332s 00:17:25.024 user 0m6.458s 00:17:25.024 sys 0m5.614s 00:17:25.024 21:18:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:25.024 21:18:18 -- common/autotest_common.sh@10 -- # set +x 00:17:25.024 ************************************ 00:17:25.024 END TEST nvmf_fused_ordering 00:17:25.024 ************************************ 00:17:25.024 21:18:18 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:17:25.024 21:18:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:25.024 21:18:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:25.024 21:18:18 -- common/autotest_common.sh@10 -- # set +x 00:17:25.024 ************************************ 00:17:25.024 START TEST nvmf_delete_subsystem 00:17:25.024 ************************************ 00:17:25.024 21:18:18 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:17:25.024 * Looking for test storage... 
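For orientation: the START TEST / END TEST banners and the real/user/sys timing block above come from the autotest harness's run_test wrapper, which times each test script and propagates its exit code. A minimal sketch of the pattern (an illustrative reduction, not the actual helper in autotest_common.sh):

run_test() {                       # illustrative reduction, not the real helper
    local name=$1; shift
    echo "************ START TEST $name ************"
    time "$@"                      # run the test script under the shell's time builtin
    local rc=$?                    # capture status before the closing banner
    echo "************ END TEST $name ************"
    return $rc
}
run_test nvmf_delete_subsystem ./test/nvmf/target/delete_subsystem.sh --transport=tcp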
00:17:25.024 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:17:25.024 21:18:18 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:17:25.024 21:18:18 -- nvmf/common.sh@7 -- # uname -s 00:17:25.024 21:18:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:25.024 21:18:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:25.024 21:18:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:25.024 21:18:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:25.024 21:18:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:25.024 21:18:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:25.024 21:18:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:25.024 21:18:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:25.024 21:18:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:25.024 21:18:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:25.024 21:18:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:17:25.024 21:18:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:17:25.024 21:18:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:25.024 21:18:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:25.024 21:18:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:17:25.024 21:18:18 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:25.024 21:18:18 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:17:25.025 21:18:18 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:25.025 21:18:18 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:25.025 21:18:18 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:25.025 21:18:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.025 21:18:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.025 21:18:18 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.025 21:18:18 -- paths/export.sh@5 -- # export PATH 00:17:25.025 21:18:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.025 21:18:18 -- nvmf/common.sh@47 -- # : 0 00:17:25.025 21:18:18 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:25.025 21:18:18 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:25.025 21:18:18 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:25.025 21:18:18 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:25.025 21:18:18 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:25.025 21:18:18 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:25.025 21:18:18 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:25.025 21:18:18 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:25.025 21:18:18 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:17:25.025 21:18:18 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:25.025 21:18:18 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:25.025 21:18:18 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:25.025 21:18:18 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:25.025 21:18:18 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:25.025 21:18:18 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:25.025 21:18:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:25.025 21:18:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:25.025 21:18:18 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:17:25.025 21:18:18 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:25.025 21:18:18 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:25.025 21:18:18 -- common/autotest_common.sh@10 -- # set +x 00:17:30.302 21:18:23 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:30.302 21:18:23 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:30.302 21:18:23 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:30.302 21:18:23 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:30.302 21:18:23 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:30.302 21:18:23 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:30.302 21:18:23 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:30.302 21:18:23 -- nvmf/common.sh@295 -- # net_devs=() 00:17:30.302 21:18:23 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:30.302 21:18:23 -- nvmf/common.sh@296 -- # e810=() 00:17:30.302 21:18:23 -- nvmf/common.sh@296 -- # local -ga e810 00:17:30.302 21:18:23 -- nvmf/common.sh@297 -- 
# x722=() 00:17:30.302 21:18:23 -- nvmf/common.sh@297 -- # local -ga x722 00:17:30.302 21:18:23 -- nvmf/common.sh@298 -- # mlx=() 00:17:30.302 21:18:23 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:30.302 21:18:23 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:30.302 21:18:23 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:30.302 21:18:23 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:30.302 21:18:23 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:30.302 21:18:23 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:30.302 21:18:23 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:30.302 21:18:23 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:30.302 21:18:23 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:30.302 21:18:23 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:30.302 21:18:23 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:30.302 21:18:23 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:30.302 21:18:23 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:30.302 21:18:23 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:30.302 21:18:23 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:17:30.302 21:18:23 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:17:30.302 21:18:23 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:17:30.302 21:18:23 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:30.302 21:18:23 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:30.302 21:18:23 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:17:30.302 Found 0000:27:00.0 (0x8086 - 0x159b) 00:17:30.302 21:18:23 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:30.302 21:18:23 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:30.302 21:18:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:30.302 21:18:23 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:30.302 21:18:23 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:30.302 21:18:23 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:30.302 21:18:23 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:17:30.302 Found 0000:27:00.1 (0x8086 - 0x159b) 00:17:30.302 21:18:23 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:30.302 21:18:23 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:30.302 21:18:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:30.302 21:18:23 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:30.302 21:18:23 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:30.302 21:18:23 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:30.302 21:18:23 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:17:30.302 21:18:23 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:30.302 21:18:23 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:30.302 21:18:23 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:30.302 21:18:23 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:30.302 21:18:23 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:17:30.302 Found net devices under 0000:27:00.0: cvl_0_0 00:17:30.302 21:18:23 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:30.302 21:18:23 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
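The PCI walk traced above reduces to a plain sysfs lookup: for each NIC function found on the bus (here the two Intel 0x159b ports bound to the ice driver), the kernel netdev name is read from /sys/bus/pci/devices/<bdf>/net/. A self-contained sketch of that mapping, using one of the addresses from the log as a sample value:

#!/usr/bin/env bash
# Resolve a PCI function to its kernel net device(s), mirroring the
# pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) lines echoed above.
shopt -s nullglob                                  # empty array if no netdev exists
pci=0000:27:00.0                                   # sample: first port found above
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
(( ${#pci_net_devs[@]} )) || { echo "no net devices under $pci"; exit 1; }
pci_net_devs=("${pci_net_devs[@]##*/}")            # strip path prefix -> cvl_0_0
echo "Found net devices under $pci: ${pci_net_devs[*]}"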
00:17:30.302 21:18:23 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:30.302 21:18:23 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:30.302 21:18:23 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:30.302 21:18:23 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:17:30.302 Found net devices under 0000:27:00.1: cvl_0_1 00:17:30.302 21:18:23 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:30.302 21:18:23 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:30.302 21:18:23 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:30.302 21:18:23 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:30.302 21:18:23 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:30.302 21:18:23 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:30.302 21:18:23 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:30.302 21:18:23 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:30.302 21:18:23 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:30.302 21:18:23 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:30.302 21:18:23 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:30.302 21:18:23 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:30.302 21:18:23 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:30.302 21:18:23 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:30.302 21:18:23 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:30.302 21:18:23 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:30.302 21:18:23 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:30.302 21:18:23 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:30.302 21:18:23 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:30.302 21:18:24 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:30.302 21:18:24 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:30.302 21:18:24 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:30.302 21:18:24 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:30.302 21:18:24 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:30.302 21:18:24 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:30.302 21:18:24 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:30.302 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:30.302 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.730 ms 00:17:30.302 00:17:30.302 --- 10.0.0.2 ping statistics --- 00:17:30.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:30.303 rtt min/avg/max/mdev = 0.730/0.730/0.730/0.000 ms 00:17:30.303 21:18:24 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:30.303 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:30.303 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.541 ms 00:17:30.303 00:17:30.303 --- 10.0.0.1 ping statistics --- 00:17:30.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:30.303 rtt min/avg/max/mdev = 0.541/0.541/0.541/0.000 ms 00:17:30.303 21:18:24 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:30.303 21:18:24 -- nvmf/common.sh@411 -- # return 0 00:17:30.303 21:18:24 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:30.303 21:18:24 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:30.303 21:18:24 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:30.303 21:18:24 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:30.303 21:18:24 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:30.303 21:18:24 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:30.303 21:18:24 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:30.303 21:18:24 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:17:30.303 21:18:24 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:30.303 21:18:24 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:30.303 21:18:24 -- common/autotest_common.sh@10 -- # set +x 00:17:30.303 21:18:24 -- nvmf/common.sh@470 -- # nvmfpid=1409708 00:17:30.303 21:18:24 -- nvmf/common.sh@471 -- # waitforlisten 1409708 00:17:30.303 21:18:24 -- common/autotest_common.sh@817 -- # '[' -z 1409708 ']' 00:17:30.303 21:18:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:30.303 21:18:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:30.303 21:18:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:30.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:30.303 21:18:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:30.303 21:18:24 -- common/autotest_common.sh@10 -- # set +x 00:17:30.303 21:18:24 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:17:30.303 [2024-04-23 21:18:24.366616] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 00:17:30.303 [2024-04-23 21:18:24.366736] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:30.303 EAL: No free 2048 kB hugepages reported on node 1 00:17:30.303 [2024-04-23 21:18:24.495274] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:30.562 [2024-04-23 21:18:24.598326] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:30.562 [2024-04-23 21:18:24.598365] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:30.562 [2024-04-23 21:18:24.598375] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:30.562 [2024-04-23 21:18:24.598385] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:30.562 [2024-04-23 21:18:24.598392] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
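To recap the plumbing performed above: having detected physical hardware (is_hw=yes), the harness moves one port (cvl_0_0) into a fresh network namespace to act as the NVMe/TCP target at 10.0.0.2, keeps the peer port (cvl_0_1) in the root namespace as the initiator at 10.0.0.1, opens TCP/4420, ping-checks both directions, and only then launches nvmf_tgt inside the namespace. A condensed sketch of the same commands, with the waitforlisten step paraphrased (the real helper lives in autotest_common.sh):

ip netns add cvl_0_0_ns_spdk                       # namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # initiator -> target sanity check

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
nvmfpid=$!
# paraphrase of waitforlisten: poll until the RPC socket answers
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
    kill -0 "$nvmfpid" || exit 1                   # target died before listening
    sleep 0.2
done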
00:17:30.562 [2024-04-23 21:18:24.598463] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:30.562 [2024-04-23 21:18:24.598472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:30.820 21:18:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:30.820 21:18:25 -- common/autotest_common.sh@850 -- # return 0 00:17:30.820 21:18:25 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:30.820 21:18:25 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:30.820 21:18:25 -- common/autotest_common.sh@10 -- # set +x 00:17:30.820 21:18:25 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:30.820 21:18:25 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:30.820 21:18:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:30.820 21:18:25 -- common/autotest_common.sh@10 -- # set +x 00:17:30.820 [2024-04-23 21:18:25.091909] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:31.079 21:18:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:31.079 21:18:25 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:31.079 21:18:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:31.079 21:18:25 -- common/autotest_common.sh@10 -- # set +x 00:17:31.079 21:18:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:31.080 21:18:25 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:31.080 21:18:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:31.080 21:18:25 -- common/autotest_common.sh@10 -- # set +x 00:17:31.080 [2024-04-23 21:18:25.108087] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:31.080 21:18:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:31.080 21:18:25 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:31.080 21:18:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:31.080 21:18:25 -- common/autotest_common.sh@10 -- # set +x 00:17:31.080 NULL1 00:17:31.080 21:18:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:31.080 21:18:25 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:17:31.080 21:18:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:31.080 21:18:25 -- common/autotest_common.sh@10 -- # set +x 00:17:31.080 Delay0 00:17:31.080 21:18:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:31.080 21:18:25 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:31.080 21:18:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:31.080 21:18:25 -- common/autotest_common.sh@10 -- # set +x 00:17:31.080 21:18:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:31.080 21:18:25 -- target/delete_subsystem.sh@28 -- # perf_pid=1410017 00:17:31.080 21:18:25 -- target/delete_subsystem.sh@30 -- # sleep 2 00:17:31.080 21:18:25 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:17:31.080 EAL: No free 2048 kB hugepages reported on node 1 00:17:31.080 [2024-04-23 21:18:25.222861] 
subsystem.c:1435:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:17:32.988 21:18:27 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:17:32.988 21:18:27 -- common/autotest_common.sh@549 -- # xtrace_disable
00:17:32.988 21:18:27 -- common/autotest_common.sh@10 -- # set +x
00:17:33.249 Read completed with error (sct=0, sc=8)
00:17:33.249 Write completed with error (sct=0, sc=8)
00:17:33.249 starting I/O failed: -6
[several hundred further "Read/Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" entries omitted; the qpair state transitions they bracket are kept below]
00:17:33.249 [2024-04-23 21:18:27.314104] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000002840 is same with the state(5) to be set
00:17:33.250 [2024-04-23 21:18:27.314772] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000002440 is same with the state(5) to be set
00:17:33.250 [2024-04-23 21:18:27.315343] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000010040 is same with the state(5) to be set
00:17:34.191 [2024-04-23 21:18:28.283028] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000002240 is same with the state(5) to be set
00:17:34.192 [2024-04-23 21:18:28.315593] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000010640 is same with the state(5) to be set
00:17:34.192 [2024-04-23 21:18:28.316098] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000010240 is same with the state(5) to be set
00:17:34.192 [2024-04-23 21:18:28.316764] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000002640 is same with the state(5) to be set
00:17:34.192 [2024-04-23 21:18:28.316975] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000002a40 is same with the state(5) to be set
00:17:34.192 [2024-04-23 21:18:28.319230] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000002240 (9): Bad file descriptor
00:17:34.192 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:17:34.192 21:18:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:17:34.192 21:18:28 -- target/delete_subsystem.sh@34 -- # delay=0
00:17:34.192 21:18:28 -- target/delete_subsystem.sh@35 -- # kill -0 1410017
00:17:34.192 21:18:28 -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:17:34.192 Initializing NVMe Controllers
00:17:34.192 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:17:34.192 Controller IO queue size 128, less than required.
00:17:34.192 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:17:34.192 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:17:34.192 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:17:34.192 Initialization complete. Launching workers.
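A note on the abort storm above: per the NVMe base specification, generic status sct=0, sc=8 is COMMAND ABORTED DUE TO SQ DELETION, and "starting I/O failed: -6" is most likely the initiator-side -ENXIO for submissions against the now-disconnected qpairs. Deleting the subsystem under a full queue aborts every in-flight command, which is exactly what this test sets out to observe; the perf tool's buffered report continues just below. After issuing the delete, delete_subsystem.sh polls until the perf process exits, roughly as follows (a sketch of the loop whose iterations are echoed here; NOT is the harness's assert-failure helper):

delay=0
while kill -0 "$perf_pid" 2>/dev/null; do          # perf still running?
    (( delay++ > 30 )) && { echo "perf outlived the deleted subsystem"; exit 1; }
    sleep 0.5
done
NOT wait "$perf_pid"                               # wait must report a failure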
00:17:34.192 ========================================================
00:17:34.192 Latency(us)
00:17:34.192 Device Information : IOPS MiB/s Average min max
00:17:34.192 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 170.41 0.08 894451.47 683.01 1011616.60
00:17:34.192 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 164.94 0.08 960321.50 480.12 2002354.16
00:17:34.192 ========================================================
00:17:34.192 Total : 335.35 0.16 926849.77 480.12 2002354.16
00:17:34.192 00:17:34.761 21:18:28 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:17:34.761 21:18:28 -- target/delete_subsystem.sh@35 -- # kill -0 1410017 00:17:34.761 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1410017) - No such process 00:17:34.761 21:18:28 -- target/delete_subsystem.sh@45 -- # NOT wait 1410017 00:17:34.761 21:18:28 -- common/autotest_common.sh@638 -- # local es=0 00:17:34.761 21:18:28 -- common/autotest_common.sh@640 -- # valid_exec_arg wait 1410017 00:17:34.761 21:18:28 -- common/autotest_common.sh@626 -- # local arg=wait 00:17:34.761 21:18:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:34.761 21:18:28 -- common/autotest_common.sh@630 -- # type -t wait 00:17:34.761 21:18:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:34.761 21:18:28 -- common/autotest_common.sh@641 -- # wait 1410017 00:17:34.761 21:18:28 -- common/autotest_common.sh@641 -- # es=1 00:17:34.761 21:18:28 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:34.761 21:18:28 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:34.761 21:18:28 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:34.761 21:18:28 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:34.761 21:18:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:34.761 21:18:28 -- common/autotest_common.sh@10 -- # set +x 00:17:34.761 21:18:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:34.761 21:18:28 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:34.761 21:18:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:34.761 21:18:28 -- common/autotest_common.sh@10 -- # set +x 00:17:34.761 [2024-04-23 21:18:28.843448] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:34.761 21:18:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:34.761 21:18:28 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:34.761 21:18:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:34.761 21:18:28 -- common/autotest_common.sh@10 -- # set +x 00:17:34.761 21:18:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:34.761 21:18:28 -- target/delete_subsystem.sh@54 -- # perf_pid=1410614 00:17:34.761 21:18:28 -- target/delete_subsystem.sh@56 -- # delay=0 00:17:34.761 21:18:28 -- target/delete_subsystem.sh@57 -- # kill -0 1410614 00:17:34.761 21:18:28 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:17:34.761 21:18:28 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:34.761 EAL: No free 2048 kB hugepages reported on node
1 00:17:34.761 [2024-04-23 21:18:28.937399] subsystem.c:1435:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:17:35.338 21:18:29 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:35.338 21:18:29 -- target/delete_subsystem.sh@57 -- # kill -0 1410614 00:17:35.338 21:18:29 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:35.598 21:18:29 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:35.598 21:18:29 -- target/delete_subsystem.sh@57 -- # kill -0 1410614 00:17:35.598 21:18:29 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:36.166 21:18:30 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:36.166 21:18:30 -- target/delete_subsystem.sh@57 -- # kill -0 1410614 00:17:36.166 21:18:30 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:36.734 21:18:30 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:36.734 21:18:30 -- target/delete_subsystem.sh@57 -- # kill -0 1410614 00:17:36.734 21:18:30 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:37.305 21:18:31 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:37.305 21:18:31 -- target/delete_subsystem.sh@57 -- # kill -0 1410614 00:17:37.305 21:18:31 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:37.873 21:18:31 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:37.873 21:18:31 -- target/delete_subsystem.sh@57 -- # kill -0 1410614 00:17:37.873 21:18:31 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:37.873 Initializing NVMe Controllers 00:17:37.873 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:37.873 Controller IO queue size 128, less than required. 00:17:37.873 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:37.873 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:17:37.873 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:17:37.873 Initialization complete. Launching workers. 
00:17:37.873 ========================================================
00:17:37.873 Latency(us)
00:17:37.873 Device Information : IOPS MiB/s Average min max
00:17:37.873 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004785.15 1000202.27 1010590.55
00:17:37.873 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003969.62 1000241.20 1013297.46
00:17:37.873 ========================================================
00:17:37.873 Total : 256.00 0.12 1004377.38 1000202.27 1013297.46
00:17:37.873 00:17:38.132 21:18:32 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:38.132 21:18:32 -- target/delete_subsystem.sh@57 -- # kill -0 1410614 00:17:38.132 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1410614) - No such process 00:17:38.132 21:18:32 -- target/delete_subsystem.sh@67 -- # wait 1410614 00:17:38.132 21:18:32 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:17:38.132 21:18:32 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:17:38.132 21:18:32 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:38.132 21:18:32 -- nvmf/common.sh@117 -- # sync 00:17:38.132 21:18:32 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:38.132 21:18:32 -- nvmf/common.sh@120 -- # set +e 00:17:38.132 21:18:32 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:38.132 21:18:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:38.132 rmmod nvme_tcp 00:17:38.391 rmmod nvme_fabrics 00:17:38.391 rmmod nvme_keyring 00:17:38.391 21:18:32 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:38.391 21:18:32 -- nvmf/common.sh@124 -- # set -e 00:17:38.391 21:18:32 -- nvmf/common.sh@125 -- # return 0 00:17:38.391 21:18:32 -- nvmf/common.sh@478 -- # '[' -n 1409708 ']' 00:17:38.391 21:18:32 -- nvmf/common.sh@479 -- # killprocess 1409708 00:17:38.391 21:18:32 -- common/autotest_common.sh@936 -- # '[' -z 1409708 ']' 00:17:38.391 21:18:32 -- common/autotest_common.sh@940 -- # kill -0 1409708 00:17:38.391 21:18:32 -- common/autotest_common.sh@941 -- # uname 00:17:38.391 21:18:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:38.391 21:18:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1409708 00:17:38.391 21:18:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:38.391 21:18:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:38.391 21:18:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1409708' 00:17:38.391 killing process with pid 1409708 00:17:38.391 21:18:32 -- common/autotest_common.sh@955 -- # kill 1409708 00:17:38.391 21:18:32 -- common/autotest_common.sh@960 -- # wait 1409708 00:17:38.962 21:18:32 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:38.963 21:18:32 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:38.963 21:18:32 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:38.963 21:18:32 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:38.963 21:18:32 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:38.963 21:18:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:38.963 21:18:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:38.963 21:18:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:40.870 21:18:35 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:40.870 00:17:40.870 real 0m16.161s 00:17:40.870 user 0m30.089s 00:17:40.870 sys 0m4.784s 00:17:40.870
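Two quick checks on this second run, assuming bdev_delay_create's -r/-t/-w/-n arguments are average/p99 latencies in microseconds (as SPDK's rpc help describes them): Delay0 was created with a 1,000,000 us floor on every operation, and the table's averages (1,004,785 us and 1,003,969 us, with minima just above 1,000,200 us) sit right on that floor, so the delay bdev dominated the measured latency. Little's law ties the rest together; per perf core,

128.00 IOPS x 1.004785 s ≈ 128.6 commands outstanding ≈ the configured queue depth (-q 128)

so the queue stayed full for the whole 3-second run, as intended.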
21:18:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:40.870 21:18:35 -- common/autotest_common.sh@10 -- # set +x 00:17:40.870 ************************************ 00:17:40.870 END TEST nvmf_delete_subsystem 00:17:40.870 ************************************ 00:17:40.870 21:18:35 -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:17:40.870 21:18:35 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:40.870 21:18:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:40.870 21:18:35 -- common/autotest_common.sh@10 -- # set +x 00:17:41.130 ************************************ 00:17:41.130 START TEST nvmf_ns_masking 00:17:41.130 ************************************ 00:17:41.130 21:18:35 -- common/autotest_common.sh@1111 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:17:41.130 * Looking for test storage... 00:17:41.130 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:17:41.130 21:18:35 -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:17:41.130 21:18:35 -- nvmf/common.sh@7 -- # uname -s 00:17:41.130 21:18:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:41.130 21:18:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:41.130 21:18:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:41.130 21:18:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:41.130 21:18:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:41.130 21:18:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:41.130 21:18:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:41.130 21:18:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:41.130 21:18:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:41.130 21:18:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:41.130 21:18:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:17:41.130 21:18:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:17:41.130 21:18:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:41.130 21:18:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:41.130 21:18:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:17:41.130 21:18:35 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:41.130 21:18:35 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:17:41.130 21:18:35 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:41.130 21:18:35 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:41.130 21:18:35 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:41.130 21:18:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.130 21:18:35 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.130 21:18:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.130 21:18:35 -- paths/export.sh@5 -- # export PATH 00:17:41.130 21:18:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.130 21:18:35 -- nvmf/common.sh@47 -- # : 0 00:17:41.130 21:18:35 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:41.130 21:18:35 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:41.130 21:18:35 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:41.130 21:18:35 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:41.130 21:18:35 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:41.130 21:18:35 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:41.130 21:18:35 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:41.130 21:18:35 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:41.130 21:18:35 -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:17:41.130 21:18:35 -- target/ns_masking.sh@11 -- # loops=5 00:17:41.130 21:18:35 -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:17:41.130 21:18:35 -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:17:41.130 21:18:35 -- target/ns_masking.sh@15 -- # uuidgen 00:17:41.130 21:18:35 -- target/ns_masking.sh@15 -- # HOSTID=c921eaf9-2bfd-4cbb-be11-a06ef6f6bd60 00:17:41.130 21:18:35 -- target/ns_masking.sh@44 -- # nvmftestinit 00:17:41.130 21:18:35 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:41.130 21:18:35 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:41.130 21:18:35 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:41.130 21:18:35 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:41.130 21:18:35 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:41.130 21:18:35 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:41.130 21:18:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:41.130 21:18:35 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:17:41.130 21:18:35 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:17:41.130 21:18:35 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:41.130 21:18:35 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:41.130 21:18:35 -- common/autotest_common.sh@10 -- # set +x 00:17:46.409 21:18:40 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:46.409 21:18:40 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:46.409 21:18:40 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:46.409 21:18:40 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:46.409 21:18:40 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:46.409 21:18:40 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:46.409 21:18:40 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:46.409 21:18:40 -- nvmf/common.sh@295 -- # net_devs=() 00:17:46.409 21:18:40 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:46.409 21:18:40 -- nvmf/common.sh@296 -- # e810=() 00:17:46.409 21:18:40 -- nvmf/common.sh@296 -- # local -ga e810 00:17:46.409 21:18:40 -- nvmf/common.sh@297 -- # x722=() 00:17:46.409 21:18:40 -- nvmf/common.sh@297 -- # local -ga x722 00:17:46.409 21:18:40 -- nvmf/common.sh@298 -- # mlx=() 00:17:46.409 21:18:40 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:46.409 21:18:40 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:46.409 21:18:40 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:46.409 21:18:40 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:46.409 21:18:40 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:46.409 21:18:40 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:46.409 21:18:40 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:46.409 21:18:40 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:46.409 21:18:40 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:46.409 21:18:40 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:46.409 21:18:40 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:46.409 21:18:40 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:46.409 21:18:40 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:46.409 21:18:40 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:46.409 21:18:40 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:17:46.409 21:18:40 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:17:46.409 21:18:40 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:17:46.409 21:18:40 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:46.409 21:18:40 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:46.409 21:18:40 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:17:46.409 Found 0000:27:00.0 (0x8086 - 0x159b) 00:17:46.409 21:18:40 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:46.409 21:18:40 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:46.409 21:18:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:46.409 21:18:40 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:46.409 21:18:40 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:46.409 21:18:40 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:46.409 21:18:40 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:17:46.409 Found 0000:27:00.1 (0x8086 - 0x159b) 00:17:46.409 21:18:40 -- nvmf/common.sh@342 -- # [[ ice 
== unknown ]] 00:17:46.409 21:18:40 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:46.409 21:18:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:46.409 21:18:40 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:46.409 21:18:40 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:46.409 21:18:40 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:46.409 21:18:40 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:17:46.409 21:18:40 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:46.409 21:18:40 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:46.409 21:18:40 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:46.409 21:18:40 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:46.409 21:18:40 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:17:46.409 Found net devices under 0000:27:00.0: cvl_0_0 00:17:46.409 21:18:40 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:46.409 21:18:40 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:46.409 21:18:40 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:46.409 21:18:40 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:46.409 21:18:40 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:46.409 21:18:40 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:17:46.409 Found net devices under 0000:27:00.1: cvl_0_1 00:17:46.409 21:18:40 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:46.409 21:18:40 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:46.409 21:18:40 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:46.409 21:18:40 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:46.409 21:18:40 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:46.409 21:18:40 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:46.409 21:18:40 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:46.409 21:18:40 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:46.409 21:18:40 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:46.409 21:18:40 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:46.409 21:18:40 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:46.409 21:18:40 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:46.409 21:18:40 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:46.409 21:18:40 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:46.409 21:18:40 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:46.409 21:18:40 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:46.409 21:18:40 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:46.409 21:18:40 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:46.409 21:18:40 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:46.409 21:18:40 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:46.409 21:18:40 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:46.409 21:18:40 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:46.409 21:18:40 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:46.409 21:18:40 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:46.409 21:18:40 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:46.409 21:18:40 -- 
nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:46.409 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:46.409 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.294 ms 00:17:46.409 00:17:46.409 --- 10.0.0.2 ping statistics --- 00:17:46.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:46.409 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:17:46.410 21:18:40 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:46.410 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:46.410 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.524 ms 00:17:46.410 00:17:46.410 --- 10.0.0.1 ping statistics --- 00:17:46.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:46.410 rtt min/avg/max/mdev = 0.524/0.524/0.524/0.000 ms 00:17:46.410 21:18:40 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:46.410 21:18:40 -- nvmf/common.sh@411 -- # return 0 00:17:46.410 21:18:40 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:46.410 21:18:40 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:46.410 21:18:40 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:46.410 21:18:40 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:46.410 21:18:40 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:46.410 21:18:40 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:46.410 21:18:40 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:46.410 21:18:40 -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:17:46.410 21:18:40 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:46.410 21:18:40 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:46.410 21:18:40 -- common/autotest_common.sh@10 -- # set +x 00:17:46.410 21:18:40 -- nvmf/common.sh@470 -- # nvmfpid=1415284 00:17:46.410 21:18:40 -- nvmf/common.sh@471 -- # waitforlisten 1415284 00:17:46.410 21:18:40 -- common/autotest_common.sh@817 -- # '[' -z 1415284 ']' 00:17:46.410 21:18:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:46.410 21:18:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:46.410 21:18:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:46.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:46.410 21:18:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:46.410 21:18:40 -- common/autotest_common.sh@10 -- # set +x 00:17:46.410 21:18:40 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:46.410 [2024-04-23 21:18:40.449068] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 00:17:46.410 [2024-04-23 21:18:40.449180] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:46.410 EAL: No free 2048 kB hugepages reported on node 1 00:17:46.410 [2024-04-23 21:18:40.574712] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:46.410 [2024-04-23 21:18:40.673276] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:46.410 [2024-04-23 21:18:40.673321] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
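
The nvmf_tcp_init sequence traced above reduces to the following netns wiring — a minimal sketch condensed from the commands in this trace, with timestamps and xtrace markers dropped; cvl_0_0/cvl_0_1 are the two ice ports enumerated earlier:

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1       # start from clean interfaces
    ip netns add cvl_0_0_ns_spdk                               # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator address, host side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP
    ping -c 1 10.0.0.2                                         # initiator -> target, as below
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1           # target -> initiator
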
00:17:46.410 [2024-04-23 21:18:40.673333] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:46.410 [2024-04-23 21:18:40.673341] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:46.410 [2024-04-23 21:18:40.673349] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:46.410 [2024-04-23 21:18:40.673437] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:46.410 [2024-04-23 21:18:40.673540] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:46.410 [2024-04-23 21:18:40.673648] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:46.410 [2024-04-23 21:18:40.673659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:46.981 21:18:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:46.981 21:18:41 -- common/autotest_common.sh@850 -- # return 0 00:17:46.981 21:18:41 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:46.981 21:18:41 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:46.981 21:18:41 -- common/autotest_common.sh@10 -- # set +x 00:17:46.981 21:18:41 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:46.981 21:18:41 -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:47.242 [2024-04-23 21:18:41.332895] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:47.242 21:18:41 -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:17:47.242 21:18:41 -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:17:47.242 21:18:41 -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:47.502 Malloc1 00:17:47.502 21:18:41 -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:47.502 Malloc2 00:17:47.502 21:18:41 -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:47.761 21:18:41 -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:17:47.761 21:18:42 -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:48.022 [2024-04-23 21:18:42.163320] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:48.022 21:18:42 -- target/ns_masking.sh@61 -- # connect 00:17:48.022 21:18:42 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I c921eaf9-2bfd-4cbb-be11-a06ef6f6bd60 -a 10.0.0.2 -s 4420 -i 4 00:17:48.022 21:18:42 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:17:48.022 21:18:42 -- common/autotest_common.sh@1184 -- # local i=0 00:17:48.022 21:18:42 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:17:48.022 21:18:42 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:17:48.022 21:18:42 -- common/autotest_common.sh@1191 -- # sleep 2 00:17:50.559 21:18:44 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:17:50.559 21:18:44 -- 
common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:17:50.559 21:18:44 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:17:50.559 21:18:44 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:17:50.559 21:18:44 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:17:50.559 21:18:44 -- common/autotest_common.sh@1194 -- # return 0 00:17:50.559 21:18:44 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:17:50.559 21:18:44 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:50.559 21:18:44 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:17:50.559 21:18:44 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:17:50.559 21:18:44 -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:17:50.559 21:18:44 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:17:50.559 21:18:44 -- target/ns_masking.sh@39 -- # grep 0x1 00:17:50.559 [ 0]:0x1 00:17:50.559 21:18:44 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:50.559 21:18:44 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:17:50.559 21:18:44 -- target/ns_masking.sh@40 -- # nguid=2e7bc319513d45ec83c17b22103259d4 00:17:50.559 21:18:44 -- target/ns_masking.sh@41 -- # [[ 2e7bc319513d45ec83c17b22103259d4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:50.559 21:18:44 -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:17:50.559 21:18:44 -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:17:50.559 21:18:44 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:17:50.560 21:18:44 -- target/ns_masking.sh@39 -- # grep 0x1 00:17:50.560 [ 0]:0x1 00:17:50.560 21:18:44 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:50.560 21:18:44 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:17:50.560 21:18:44 -- target/ns_masking.sh@40 -- # nguid=2e7bc319513d45ec83c17b22103259d4 00:17:50.560 21:18:44 -- target/ns_masking.sh@41 -- # [[ 2e7bc319513d45ec83c17b22103259d4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:50.560 21:18:44 -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:17:50.560 21:18:44 -- target/ns_masking.sh@39 -- # grep 0x2 00:17:50.560 21:18:44 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:17:50.560 [ 1]:0x2 00:17:50.560 21:18:44 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:50.560 21:18:44 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:17:50.560 21:18:44 -- target/ns_masking.sh@40 -- # nguid=bba20533af314f168b490ced00eea60e 00:17:50.560 21:18:44 -- target/ns_masking.sh@41 -- # [[ bba20533af314f168b490ced00eea60e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:50.560 21:18:44 -- target/ns_masking.sh@69 -- # disconnect 00:17:50.560 21:18:44 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:50.818 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:50.818 21:18:44 -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:50.818 21:18:45 -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:17:51.078 21:18:45 -- target/ns_masking.sh@77 -- # connect 1 00:17:51.078 
21:18:45 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I c921eaf9-2bfd-4cbb-be11-a06ef6f6bd60 -a 10.0.0.2 -s 4420 -i 4 00:17:51.078 21:18:45 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:17:51.078 21:18:45 -- common/autotest_common.sh@1184 -- # local i=0 00:17:51.078 21:18:45 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:17:51.078 21:18:45 -- common/autotest_common.sh@1186 -- # [[ -n 1 ]] 00:17:51.078 21:18:45 -- common/autotest_common.sh@1187 -- # nvme_device_counter=1 00:17:51.078 21:18:45 -- common/autotest_common.sh@1191 -- # sleep 2 00:17:53.622 21:18:47 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:17:53.622 21:18:47 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:17:53.622 21:18:47 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:17:53.623 21:18:47 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:17:53.623 21:18:47 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:17:53.623 21:18:47 -- common/autotest_common.sh@1194 -- # return 0 00:17:53.623 21:18:47 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:17:53.623 21:18:47 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:53.623 21:18:47 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:17:53.623 21:18:47 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:17:53.623 21:18:47 -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:17:53.623 21:18:47 -- common/autotest_common.sh@638 -- # local es=0 00:17:53.623 21:18:47 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:17:53.623 21:18:47 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:17:53.623 21:18:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:53.623 21:18:47 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:17:53.623 21:18:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:53.623 21:18:47 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:17:53.623 21:18:47 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:17:53.623 21:18:47 -- target/ns_masking.sh@39 -- # grep 0x1 00:17:53.623 21:18:47 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:53.623 21:18:47 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:17:53.623 21:18:47 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:17:53.623 21:18:47 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:53.623 21:18:47 -- common/autotest_common.sh@641 -- # es=1 00:17:53.623 21:18:47 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:53.623 21:18:47 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:53.623 21:18:47 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:53.623 21:18:47 -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:17:53.623 21:18:47 -- target/ns_masking.sh@39 -- # grep 0x2 00:17:53.623 21:18:47 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:17:53.623 [ 0]:0x2 00:17:53.623 21:18:47 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:53.623 21:18:47 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:17:53.623 21:18:47 -- target/ns_masking.sh@40 -- # nguid=bba20533af314f168b490ced00eea60e 00:17:53.623 21:18:47 -- 
target/ns_masking.sh@41 -- # [[ bba20533af314f168b490ced00eea60e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:53.623 21:18:47 -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:53.623 21:18:47 -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:17:53.623 21:18:47 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:17:53.623 21:18:47 -- target/ns_masking.sh@39 -- # grep 0x1 00:17:53.623 [ 0]:0x1 00:17:53.623 21:18:47 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:53.623 21:18:47 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:17:53.623 21:18:47 -- target/ns_masking.sh@40 -- # nguid=2e7bc319513d45ec83c17b22103259d4 00:17:53.623 21:18:47 -- target/ns_masking.sh@41 -- # [[ 2e7bc319513d45ec83c17b22103259d4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:53.623 21:18:47 -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:17:53.623 21:18:47 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:17:53.623 21:18:47 -- target/ns_masking.sh@39 -- # grep 0x2 00:17:53.623 [ 1]:0x2 00:17:53.623 21:18:47 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:53.623 21:18:47 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:17:53.623 21:18:47 -- target/ns_masking.sh@40 -- # nguid=bba20533af314f168b490ced00eea60e 00:17:53.623 21:18:47 -- target/ns_masking.sh@41 -- # [[ bba20533af314f168b490ced00eea60e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:53.623 21:18:47 -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:53.882 21:18:47 -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:17:53.882 21:18:47 -- common/autotest_common.sh@638 -- # local es=0 00:17:53.882 21:18:47 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:17:53.882 21:18:47 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:17:53.883 21:18:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:53.883 21:18:47 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:17:53.883 21:18:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:53.883 21:18:47 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:17:53.883 21:18:47 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:17:53.883 21:18:47 -- target/ns_masking.sh@39 -- # grep 0x1 00:17:53.883 21:18:47 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:53.883 21:18:47 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:17:53.883 21:18:47 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:17:53.883 21:18:47 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:53.883 21:18:47 -- common/autotest_common.sh@641 -- # es=1 00:17:53.883 21:18:47 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:53.883 21:18:47 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:53.883 21:18:47 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:53.883 21:18:47 -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:17:53.883 21:18:47 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:17:53.883 21:18:47 -- target/ns_masking.sh@39 -- # grep 0x2 00:17:53.883 [ 0]:0x2 00:17:53.883 
21:18:48 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:53.883 21:18:48 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:17:53.883 21:18:48 -- target/ns_masking.sh@40 -- # nguid=bba20533af314f168b490ced00eea60e 00:17:53.883 21:18:48 -- target/ns_masking.sh@41 -- # [[ bba20533af314f168b490ced00eea60e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:53.883 21:18:48 -- target/ns_masking.sh@91 -- # disconnect 00:17:53.883 21:18:48 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:53.883 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:53.883 21:18:48 -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:54.144 21:18:48 -- target/ns_masking.sh@95 -- # connect 2 00:17:54.144 21:18:48 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I c921eaf9-2bfd-4cbb-be11-a06ef6f6bd60 -a 10.0.0.2 -s 4420 -i 4 00:17:54.403 21:18:48 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:54.403 21:18:48 -- common/autotest_common.sh@1184 -- # local i=0 00:17:54.403 21:18:48 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:17:54.403 21:18:48 -- common/autotest_common.sh@1186 -- # [[ -n 2 ]] 00:17:54.403 21:18:48 -- common/autotest_common.sh@1187 -- # nvme_device_counter=2 00:17:54.403 21:18:48 -- common/autotest_common.sh@1191 -- # sleep 2 00:17:56.372 21:18:50 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:17:56.372 21:18:50 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:17:56.372 21:18:50 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:17:56.372 21:18:50 -- common/autotest_common.sh@1193 -- # nvme_devices=2 00:17:56.372 21:18:50 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:17:56.372 21:18:50 -- common/autotest_common.sh@1194 -- # return 0 00:17:56.372 21:18:50 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:17:56.372 21:18:50 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:56.372 21:18:50 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:17:56.372 21:18:50 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:17:56.372 21:18:50 -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:17:56.372 21:18:50 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:17:56.372 21:18:50 -- target/ns_masking.sh@39 -- # grep 0x1 00:17:56.632 [ 0]:0x1 00:17:56.632 21:18:50 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:56.632 21:18:50 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:17:56.632 21:18:50 -- target/ns_masking.sh@40 -- # nguid=2e7bc319513d45ec83c17b22103259d4 00:17:56.632 21:18:50 -- target/ns_masking.sh@41 -- # [[ 2e7bc319513d45ec83c17b22103259d4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:56.632 21:18:50 -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:17:56.632 21:18:50 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:17:56.632 21:18:50 -- target/ns_masking.sh@39 -- # grep 0x2 00:17:56.632 [ 1]:0x2 00:17:56.632 21:18:50 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:56.632 21:18:50 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:17:56.632 21:18:50 -- target/ns_masking.sh@40 -- # 
nguid=bba20533af314f168b490ced00eea60e 00:17:56.632 21:18:50 -- target/ns_masking.sh@41 -- # [[ bba20533af314f168b490ced00eea60e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:56.632 21:18:50 -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:56.893 21:18:50 -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:17:56.893 21:18:50 -- common/autotest_common.sh@638 -- # local es=0 00:17:56.893 21:18:50 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:17:56.893 21:18:50 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:17:56.893 21:18:50 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:56.893 21:18:50 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:17:56.893 21:18:50 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:56.893 21:18:50 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:17:56.893 21:18:50 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:17:56.893 21:18:50 -- target/ns_masking.sh@39 -- # grep 0x1 00:17:56.893 21:18:50 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:56.893 21:18:50 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:17:56.893 21:18:50 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:17:56.893 21:18:50 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:56.893 21:18:50 -- common/autotest_common.sh@641 -- # es=1 00:17:56.893 21:18:50 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:56.893 21:18:50 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:56.893 21:18:50 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:56.893 21:18:50 -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:17:56.893 21:18:50 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:17:56.893 21:18:50 -- target/ns_masking.sh@39 -- # grep 0x2 00:17:56.893 [ 0]:0x2 00:17:56.893 21:18:50 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:56.893 21:18:50 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:17:56.893 21:18:51 -- target/ns_masking.sh@40 -- # nguid=bba20533af314f168b490ced00eea60e 00:17:56.893 21:18:51 -- target/ns_masking.sh@41 -- # [[ bba20533af314f168b490ced00eea60e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:56.893 21:18:51 -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:56.893 21:18:51 -- common/autotest_common.sh@638 -- # local es=0 00:17:56.893 21:18:51 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:56.893 21:18:51 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:17:56.893 21:18:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:56.893 21:18:51 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:17:56.893 21:18:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:56.893 21:18:51 -- common/autotest_common.sh@632 -- # type -P 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:17:56.893 21:18:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:56.893 21:18:51 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:17:56.893 21:18:51 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py ]] 00:17:56.893 21:18:51 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:56.893 [2024-04-23 21:18:51.136875] nvmf_rpc.c:1779:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:17:56.893 request: 00:17:56.893 { 00:17:56.893 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:56.893 "nsid": 2, 00:17:56.893 "host": "nqn.2016-06.io.spdk:host1", 00:17:56.893 "method": "nvmf_ns_remove_host", 00:17:56.893 "req_id": 1 00:17:56.893 } 00:17:56.893 Got JSON-RPC error response 00:17:56.893 response: 00:17:56.893 { 00:17:56.893 "code": -32602, 00:17:56.893 "message": "Invalid parameters" 00:17:56.893 } 00:17:56.893 21:18:51 -- common/autotest_common.sh@641 -- # es=1 00:17:56.893 21:18:51 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:56.893 21:18:51 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:56.893 21:18:51 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:56.893 21:18:51 -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:17:56.893 21:18:51 -- common/autotest_common.sh@638 -- # local es=0 00:17:56.893 21:18:51 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:17:56.893 21:18:51 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:17:56.893 21:18:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:56.893 21:18:51 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:17:56.893 21:18:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:56.893 21:18:51 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:17:56.893 21:18:51 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:17:56.893 21:18:51 -- target/ns_masking.sh@39 -- # grep 0x1 00:17:57.155 21:18:51 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:57.155 21:18:51 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:17:57.155 21:18:51 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:17:57.155 21:18:51 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:57.155 21:18:51 -- common/autotest_common.sh@641 -- # es=1 00:17:57.155 21:18:51 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:57.155 21:18:51 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:57.155 21:18:51 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:57.155 21:18:51 -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:17:57.155 21:18:51 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:17:57.155 21:18:51 -- target/ns_masking.sh@39 -- # grep 0x2 00:17:57.155 [ 0]:0x2 00:17:57.155 21:18:51 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:57.155 21:18:51 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:17:57.155 21:18:51 -- target/ns_masking.sh@40 -- # nguid=bba20533af314f168b490ced00eea60e 00:17:57.155 21:18:51 -- target/ns_masking.sh@41 -- # [[ bba20533af314f168b490ced00eea60e != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:57.155 21:18:51 -- target/ns_masking.sh@108 -- # disconnect 00:17:57.155 21:18:51 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:57.155 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:57.155 21:18:51 -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:57.417 21:18:51 -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:17:57.417 21:18:51 -- target/ns_masking.sh@114 -- # nvmftestfini 00:17:57.417 21:18:51 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:57.417 21:18:51 -- nvmf/common.sh@117 -- # sync 00:17:57.417 21:18:51 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:57.417 21:18:51 -- nvmf/common.sh@120 -- # set +e 00:17:57.417 21:18:51 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:57.417 21:18:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:57.417 rmmod nvme_tcp 00:17:57.417 rmmod nvme_fabrics 00:17:57.417 rmmod nvme_keyring 00:17:57.417 21:18:51 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:57.417 21:18:51 -- nvmf/common.sh@124 -- # set -e 00:17:57.417 21:18:51 -- nvmf/common.sh@125 -- # return 0 00:17:57.417 21:18:51 -- nvmf/common.sh@478 -- # '[' -n 1415284 ']' 00:17:57.417 21:18:51 -- nvmf/common.sh@479 -- # killprocess 1415284 00:17:57.417 21:18:51 -- common/autotest_common.sh@936 -- # '[' -z 1415284 ']' 00:17:57.417 21:18:51 -- common/autotest_common.sh@940 -- # kill -0 1415284 00:17:57.417 21:18:51 -- common/autotest_common.sh@941 -- # uname 00:17:57.417 21:18:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:57.417 21:18:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1415284 00:17:57.417 21:18:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:57.417 21:18:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:57.417 21:18:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1415284' 00:17:57.417 killing process with pid 1415284 00:17:57.417 21:18:51 -- common/autotest_common.sh@955 -- # kill 1415284 00:17:57.417 21:18:51 -- common/autotest_common.sh@960 -- # wait 1415284 00:17:57.987 21:18:52 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:57.987 21:18:52 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:57.987 21:18:52 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:57.987 21:18:52 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:57.987 21:18:52 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:57.987 21:18:52 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:57.987 21:18:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:57.987 21:18:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:00.526 21:18:54 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:00.527 00:18:00.527 real 0m19.048s 00:18:00.527 user 0m48.965s 00:18:00.527 sys 0m5.151s 00:18:00.527 21:18:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:00.527 21:18:54 -- common/autotest_common.sh@10 -- # set +x 00:18:00.527 ************************************ 00:18:00.527 END TEST nvmf_ns_masking 00:18:00.527 ************************************ 00:18:00.527 21:18:54 -- nvmf/nvmf.sh@37 -- # [[ 0 -eq 1 ]] 00:18:00.527 21:18:54 -- nvmf/nvmf.sh@40 -- # [[ 0 -eq 1 ]] 00:18:00.527 21:18:54 -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:18:00.527 21:18:54 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:00.527 21:18:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:00.527 21:18:54 -- common/autotest_common.sh@10 -- # set +x 00:18:00.527 ************************************ 00:18:00.527 START TEST nvmf_host_management 00:18:00.527 ************************************ 00:18:00.527 21:18:54 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:18:00.527 * Looking for test storage... 00:18:00.527 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:18:00.527 21:18:54 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:18:00.527 21:18:54 -- nvmf/common.sh@7 -- # uname -s 00:18:00.527 21:18:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:00.527 21:18:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:00.527 21:18:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:00.527 21:18:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:00.527 21:18:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:00.527 21:18:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:00.527 21:18:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:00.527 21:18:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:00.527 21:18:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:00.527 21:18:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:00.527 21:18:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:18:00.527 21:18:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:18:00.527 21:18:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:00.527 21:18:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:00.527 21:18:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:18:00.527 21:18:54 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:00.527 21:18:54 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:18:00.527 21:18:54 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:00.527 21:18:54 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:00.527 21:18:54 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:00.527 21:18:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.527 21:18:54 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.527 21:18:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.527 21:18:54 -- paths/export.sh@5 -- # export PATH 00:18:00.527 21:18:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.527 21:18:54 -- nvmf/common.sh@47 -- # : 0 00:18:00.527 21:18:54 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:00.527 21:18:54 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:00.527 21:18:54 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:00.527 21:18:54 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:00.527 21:18:54 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:00.527 21:18:54 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:00.527 21:18:54 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:00.527 21:18:54 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:00.527 21:18:54 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:00.527 21:18:54 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:00.527 21:18:54 -- target/host_management.sh@105 -- # nvmftestinit 00:18:00.527 21:18:54 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:00.527 21:18:54 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:00.527 21:18:54 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:00.527 21:18:54 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:00.527 21:18:54 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:00.527 21:18:54 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:00.527 21:18:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:00.527 21:18:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:00.527 21:18:54 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:18:00.527 21:18:54 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:00.527 21:18:54 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:00.527 21:18:54 -- common/autotest_common.sh@10 -- # set +x 00:18:05.804 21:18:59 -- nvmf/common.sh@289 -- # local 
intel=0x8086 mellanox=0x15b3 pci 00:18:05.804 21:18:59 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:05.804 21:18:59 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:05.804 21:18:59 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:05.804 21:18:59 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:05.804 21:18:59 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:05.804 21:18:59 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:05.804 21:18:59 -- nvmf/common.sh@295 -- # net_devs=() 00:18:05.804 21:18:59 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:05.804 21:18:59 -- nvmf/common.sh@296 -- # e810=() 00:18:05.804 21:18:59 -- nvmf/common.sh@296 -- # local -ga e810 00:18:05.804 21:18:59 -- nvmf/common.sh@297 -- # x722=() 00:18:05.804 21:18:59 -- nvmf/common.sh@297 -- # local -ga x722 00:18:05.804 21:18:59 -- nvmf/common.sh@298 -- # mlx=() 00:18:05.804 21:18:59 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:05.804 21:18:59 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:05.804 21:18:59 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:05.804 21:18:59 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:05.804 21:18:59 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:05.804 21:18:59 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:05.804 21:18:59 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:05.804 21:18:59 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:05.804 21:18:59 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:05.804 21:18:59 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:05.804 21:18:59 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:05.804 21:18:59 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:05.804 21:18:59 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:05.804 21:18:59 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:05.804 21:18:59 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:18:05.804 21:18:59 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:18:05.804 21:18:59 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:18:05.804 21:18:59 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:05.804 21:18:59 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:05.804 21:18:59 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:18:05.804 Found 0000:27:00.0 (0x8086 - 0x159b) 00:18:05.804 21:18:59 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:05.804 21:18:59 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:05.804 21:18:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:05.804 21:18:59 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:05.804 21:18:59 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:05.804 21:18:59 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:05.804 21:18:59 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:18:05.804 Found 0000:27:00.1 (0x8086 - 0x159b) 00:18:05.804 21:18:59 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:05.804 21:18:59 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:05.804 21:18:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:05.804 21:18:59 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:05.804 21:18:59 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:05.804 21:18:59 -- nvmf/common.sh@366 -- # 
(( 0 > 0 )) 00:18:05.804 21:18:59 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:18:05.804 21:18:59 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:05.804 21:18:59 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:05.804 21:18:59 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:05.804 21:18:59 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:05.804 21:18:59 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:18:05.804 Found net devices under 0000:27:00.0: cvl_0_0 00:18:05.804 21:18:59 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:05.804 21:18:59 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:05.804 21:18:59 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:05.804 21:18:59 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:05.804 21:18:59 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:05.804 21:18:59 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:18:05.804 Found net devices under 0000:27:00.1: cvl_0_1 00:18:05.804 21:18:59 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:05.804 21:18:59 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:05.804 21:18:59 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:05.804 21:18:59 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:05.804 21:18:59 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:18:05.804 21:18:59 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:18:05.804 21:18:59 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:05.804 21:18:59 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:05.804 21:18:59 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:05.804 21:18:59 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:05.804 21:18:59 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:05.804 21:18:59 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:05.805 21:18:59 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:05.805 21:18:59 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:05.805 21:18:59 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:05.805 21:18:59 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:05.805 21:18:59 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:05.805 21:18:59 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:05.805 21:18:59 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:05.805 21:19:00 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:05.805 21:19:00 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:05.805 21:19:00 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:05.805 21:19:00 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:06.065 21:19:00 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:06.065 21:19:00 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:06.065 21:19:00 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:06.065 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:06.065 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.336 ms 00:18:06.065 00:18:06.065 --- 10.0.0.2 ping statistics --- 00:18:06.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:06.065 rtt min/avg/max/mdev = 0.336/0.336/0.336/0.000 ms 00:18:06.065 21:19:00 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:06.065 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:06.065 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:18:06.065 00:18:06.065 --- 10.0.0.1 ping statistics --- 00:18:06.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:06.065 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:18:06.065 21:19:00 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:06.065 21:19:00 -- nvmf/common.sh@411 -- # return 0 00:18:06.065 21:19:00 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:06.065 21:19:00 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:06.065 21:19:00 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:06.065 21:19:00 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:06.065 21:19:00 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:06.065 21:19:00 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:06.065 21:19:00 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:06.065 21:19:00 -- target/host_management.sh@107 -- # run_test nvmf_host_management nvmf_host_management 00:18:06.065 21:19:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:06.065 21:19:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:06.065 21:19:00 -- common/autotest_common.sh@10 -- # set +x 00:18:06.065 ************************************ 00:18:06.065 START TEST nvmf_host_management 00:18:06.065 ************************************ 00:18:06.065 21:19:00 -- common/autotest_common.sh@1111 -- # nvmf_host_management 00:18:06.065 21:19:00 -- target/host_management.sh@69 -- # starttarget 00:18:06.065 21:19:00 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:18:06.065 21:19:00 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:06.065 21:19:00 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:06.065 21:19:00 -- common/autotest_common.sh@10 -- # set +x 00:18:06.065 21:19:00 -- nvmf/common.sh@470 -- # nvmfpid=1421737 00:18:06.065 21:19:00 -- nvmf/common.sh@471 -- # waitforlisten 1421737 00:18:06.065 21:19:00 -- common/autotest_common.sh@817 -- # '[' -z 1421737 ']' 00:18:06.065 21:19:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:06.065 21:19:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:06.065 21:19:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:06.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:06.065 21:19:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:06.065 21:19:00 -- common/autotest_common.sh@10 -- # set +x 00:18:06.065 21:19:00 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:18:06.325 [2024-04-23 21:19:00.343967] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 
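
The nvmf_tgt launched just above runs under ip netns exec in the cvl_0_0_ns_spdk namespace with core mask 0x1E, which the EAL parameter line that follows carries as -c 0x1E. A quick bash check of which cores that mask selects (0x1E = 30 = binary 11110, i.e. cores 1-4 — matching the four reactor notices below):

    printf '0x1E = %d\n' 0x1E                        # prints 30, binary 11110
    for c in 0 1 2 3 4; do
        echo "core $c enabled: $(( (0x1E >> c) & 1 ))"   # 0 for core 0, 1 for cores 1-4
    done
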
00:18:06.325 [2024-04-23 21:19:00.344075] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:06.325 EAL: No free 2048 kB hugepages reported on node 1 00:18:06.325 [2024-04-23 21:19:00.471415] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:06.325 [2024-04-23 21:19:00.571132] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:06.325 [2024-04-23 21:19:00.571169] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:06.325 [2024-04-23 21:19:00.571181] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:06.325 [2024-04-23 21:19:00.571191] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:06.325 [2024-04-23 21:19:00.571199] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:06.325 [2024-04-23 21:19:00.571374] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:06.325 [2024-04-23 21:19:00.571389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:06.325 [2024-04-23 21:19:00.571517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:06.325 [2024-04-23 21:19:00.571547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:18:06.898 21:19:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:06.898 21:19:01 -- common/autotest_common.sh@850 -- # return 0 00:18:06.898 21:19:01 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:06.898 21:19:01 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:06.898 21:19:01 -- common/autotest_common.sh@10 -- # set +x 00:18:06.898 21:19:01 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:06.898 21:19:01 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:06.898 21:19:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:06.898 21:19:01 -- common/autotest_common.sh@10 -- # set +x 00:18:06.898 [2024-04-23 21:19:01.090284] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:06.898 21:19:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:06.898 21:19:01 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:18:06.898 21:19:01 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:06.898 21:19:01 -- common/autotest_common.sh@10 -- # set +x 00:18:06.898 21:19:01 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:18:06.898 21:19:01 -- target/host_management.sh@23 -- # cat 00:18:06.898 21:19:01 -- target/host_management.sh@30 -- # rpc_cmd 00:18:06.898 21:19:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:06.898 21:19:01 -- common/autotest_common.sh@10 -- # set +x 00:18:06.898 Malloc0 00:18:06.898 [2024-04-23 21:19:01.168157] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:07.159 21:19:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:07.160 21:19:01 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:18:07.160 21:19:01 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:07.160 21:19:01 -- common/autotest_common.sh@10 -- # set +x 00:18:07.160 21:19:01 
-- target/host_management.sh@73 -- # perfpid=1421852 00:18:07.160 21:19:01 -- target/host_management.sh@74 -- # waitforlisten 1421852 /var/tmp/bdevperf.sock 00:18:07.160 21:19:01 -- common/autotest_common.sh@817 -- # '[' -z 1421852 ']' 00:18:07.160 21:19:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:07.160 21:19:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:07.160 21:19:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:07.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:07.160 21:19:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:07.160 21:19:01 -- common/autotest_common.sh@10 -- # set +x 00:18:07.160 21:19:01 -- target/host_management.sh@72 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:18:07.160 21:19:01 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:18:07.160 21:19:01 -- nvmf/common.sh@521 -- # config=() 00:18:07.160 21:19:01 -- nvmf/common.sh@521 -- # local subsystem config 00:18:07.160 21:19:01 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:07.160 21:19:01 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:07.160 { 00:18:07.160 "params": { 00:18:07.160 "name": "Nvme$subsystem", 00:18:07.160 "trtype": "$TEST_TRANSPORT", 00:18:07.160 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:07.160 "adrfam": "ipv4", 00:18:07.160 "trsvcid": "$NVMF_PORT", 00:18:07.160 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:07.160 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:07.160 "hdgst": ${hdgst:-false}, 00:18:07.160 "ddgst": ${ddgst:-false} 00:18:07.160 }, 00:18:07.160 "method": "bdev_nvme_attach_controller" 00:18:07.160 } 00:18:07.160 EOF 00:18:07.160 )") 00:18:07.160 21:19:01 -- nvmf/common.sh@543 -- # cat 00:18:07.160 21:19:01 -- nvmf/common.sh@545 -- # jq . 00:18:07.160 21:19:01 -- nvmf/common.sh@546 -- # IFS=, 00:18:07.160 21:19:01 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:18:07.160 "params": { 00:18:07.160 "name": "Nvme0", 00:18:07.160 "trtype": "tcp", 00:18:07.160 "traddr": "10.0.0.2", 00:18:07.160 "adrfam": "ipv4", 00:18:07.160 "trsvcid": "4420", 00:18:07.160 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:07.160 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:18:07.160 "hdgst": false, 00:18:07.160 "ddgst": false 00:18:07.160 }, 00:18:07.160 "method": "bdev_nvme_attach_controller" 00:18:07.160 }' 00:18:07.160 [2024-04-23 21:19:01.304850] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 00:18:07.160 [2024-04-23 21:19:01.304989] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1421852 ] 00:18:07.160 EAL: No free 2048 kB hugepages reported on node 1 00:18:07.419 [2024-04-23 21:19:01.438053] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:07.419 [2024-04-23 21:19:01.531554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:07.677 Running I/O for 10 seconds... 
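The bdevperf job above never touches a config file on disk: gen_nvmf_target_json renders one bdev_nvme_attach_controller entry per subsystem and feeds it to bdevperf through the /dev/fd/63 process substitution. As a minimal standalone sketch of the same run, the generated JSON can be written out by hand; the subsystems/config wrapper below follows SPDK's usual --json layout, and the /tmp path and wrapper are illustrative, only the params block is taken verbatim from the printf above:

# Hand-rolled equivalent of gen_nvmf_target_json 0 for this run
cat > /tmp/bdevperf.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
# Same rpc socket, queue depth, IO size, workload and runtime as the logged run
build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/bdevperf.json -q 64 -o 65536 -w verify -t 10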
00:18:07.937 21:19:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:07.937 21:19:02 -- common/autotest_common.sh@850 -- # return 0 00:18:07.937 21:19:02 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:18:07.937 21:19:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:07.937 21:19:02 -- common/autotest_common.sh@10 -- # set +x 00:18:07.937 21:19:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:07.937 21:19:02 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:07.937 21:19:02 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:18:07.937 21:19:02 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:18:07.937 21:19:02 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:18:07.937 21:19:02 -- target/host_management.sh@52 -- # local ret=1 00:18:07.937 21:19:02 -- target/host_management.sh@53 -- # local i 00:18:07.937 21:19:02 -- target/host_management.sh@54 -- # (( i = 10 )) 00:18:07.937 21:19:02 -- target/host_management.sh@54 -- # (( i != 0 )) 00:18:07.937 21:19:02 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:18:07.937 21:19:02 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:18:07.937 21:19:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:07.937 21:19:02 -- common/autotest_common.sh@10 -- # set +x 00:18:07.937 21:19:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:07.937 21:19:02 -- target/host_management.sh@55 -- # read_io_count=259 00:18:07.937 21:19:02 -- target/host_management.sh@58 -- # '[' 259 -ge 100 ']' 00:18:07.937 21:19:02 -- target/host_management.sh@59 -- # ret=0 00:18:07.937 21:19:02 -- target/host_management.sh@60 -- # break 00:18:07.937 21:19:02 -- target/host_management.sh@64 -- # return 0 00:18:07.937 21:19:02 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:18:07.937 21:19:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:07.937 21:19:02 -- common/autotest_common.sh@10 -- # set +x 00:18:07.937 [2024-04-23 21:19:02.069402] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:18:07.937 [2024-04-23 21:19:02.069889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:45568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.937 [2024-04-23 21:19:02.069943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.938 [2024-04-23 21:19:02.069966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:45696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.938 [2024-04-23 21:19:02.069974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.938 [2024-04-23 21:19:02.069985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:45824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.938 [2024-04-23 21:19:02.069993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.938 [2024-04-23 21:19:02.070003] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:45952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.938 [2024-04-23 21:19:02.070011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.938 [2024-04-23 21:19:02.070021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:46080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.938 [2024-04-23 21:19:02.070028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.938 [2024-04-23 21:19:02.070038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:46208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.938 [2024-04-23 21:19:02.070046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.938 [2024-04-23 21:19:02.070056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:46336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.938 [2024-04-23 21:19:02.070064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.938 [2024-04-23 21:19:02.070073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:46464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.938 [2024-04-23 21:19:02.070080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.938 [2024-04-23 21:19:02.070090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:46592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.938 [2024-04-23 21:19:02.070097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.938 [2024-04-23 21:19:02.070107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:46720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.938 [2024-04-23 21:19:02.070114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.938 [2024-04-23 21:19:02.070123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:46848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.938 [2024-04-23 21:19:02.070130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.938 [2024-04-23 21:19:02.070140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:46976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.938 [2024-04-23 21:19:02.070147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.938 [2024-04-23 21:19:02.070157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:47104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.938 [2024-04-23 21:19:02.070175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.938 [2024-04-23 21:19:02.070186] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:47232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.938 [2024-04-23 21:19:02.070194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.938 [2024-04-23 21:19:02.070203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:47360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.938 [2024-04-23 21:19:02.070211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.938 [2024-04-23 21:19:02.070222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:47488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.938 [2024-04-23 21:19:02.070229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.938 [2024-04-23 21:19:02.070239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:47616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.938 [2024-04-23 21:19:02.070247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.938 [2024-04-23 21:19:02.070256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:47744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.938 [2024-04-23 21:19:02.070264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.938 [2024-04-23 21:19:02.070274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:47872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.938 [2024-04-23 21:19:02.070282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.938 [2024-04-23 21:19:02.070292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:48000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.938 [2024-04-23 21:19:02.070300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.938 [2024-04-23 21:19:02.070310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:48128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.938 [2024-04-23 21:19:02.070317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.938 [2024-04-23 21:19:02.070327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:48256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.938 [2024-04-23 21:19:02.070335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.938 [2024-04-23 21:19:02.070345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:48384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.938 [2024-04-23 21:19:02.070353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.938 [2024-04-23 21:19:02.070363] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:48512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.938 [2024-04-23 21:19:02.070370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.938 [2024-04-23 21:19:02.070381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:48640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.938 [2024-04-23 21:19:02.070389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.938 [2024-04-23 21:19:02.070398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:48768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.938 [2024-04-23 21:19:02.070406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.938 [2024-04-23 21:19:02.070416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:48896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.938 [2024-04-23 21:19:02.070424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.938 [2024-04-23 21:19:02.070434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:49024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.938 [2024-04-23 21:19:02.070441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.938 [2024-04-23 21:19:02.070450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:40960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.938 [2024-04-23 21:19:02.070459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.938 [2024-04-23 21:19:02.070469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:41088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.938 [2024-04-23 21:19:02.070476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.938 [2024-04-23 21:19:02.070485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:41216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.938 [2024-04-23 21:19:02.070493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.938 [2024-04-23 21:19:02.070502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:41344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.938 [2024-04-23 21:19:02.070510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.938 [2024-04-23 21:19:02.070520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:41472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.938 [2024-04-23 21:19:02.070527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.938 [2024-04-23 21:19:02.070537] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:41600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.938 [2024-04-23 21:19:02.070544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.938 [2024-04-23 21:19:02.070554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:41728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.938 [2024-04-23 21:19:02.070561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.938 [2024-04-23 21:19:02.070571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:41856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.938 [2024-04-23 21:19:02.070578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.938 [2024-04-23 21:19:02.070588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:41984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.938 [2024-04-23 21:19:02.070596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.938 [2024-04-23 21:19:02.070606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:42112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.938 [2024-04-23 21:19:02.070613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.939 [2024-04-23 21:19:02.070624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:42240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.939 [2024-04-23 21:19:02.070636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.939 [2024-04-23 21:19:02.070646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:42368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.939 [2024-04-23 21:19:02.070653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.939 [2024-04-23 21:19:02.070662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:42496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.939 [2024-04-23 21:19:02.070670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.939 [2024-04-23 21:19:02.070680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:42624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.939 [2024-04-23 21:19:02.070687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.939 [2024-04-23 21:19:02.070697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:42752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.939 [2024-04-23 21:19:02.070704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.939 [2024-04-23 21:19:02.070714] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:42880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.939 [2024-04-23 21:19:02.070722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.939 [2024-04-23 21:19:02.070731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:43008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.939 [2024-04-23 21:19:02.070739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.939 [2024-04-23 21:19:02.070748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:43136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.939 [2024-04-23 21:19:02.070756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.939 [2024-04-23 21:19:02.070765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:43264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.939 [2024-04-23 21:19:02.070773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.939 [2024-04-23 21:19:02.070782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:43392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.939 [2024-04-23 21:19:02.070789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.939 [2024-04-23 21:19:02.070799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:43520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.939 [2024-04-23 21:19:02.070806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.939 [2024-04-23 21:19:02.070816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:43648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.939 [2024-04-23 21:19:02.070823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.939 [2024-04-23 21:19:02.070833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:43776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.939 [2024-04-23 21:19:02.070842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.939 [2024-04-23 21:19:02.070851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:43904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.939 [2024-04-23 21:19:02.070859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.939 [2024-04-23 21:19:02.070868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:44032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.939 [2024-04-23 21:19:02.070876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.939 [2024-04-23 21:19:02.070885] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:44160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.939 [2024-04-23 21:19:02.070892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.939 [2024-04-23 21:19:02.070902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:44288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.939 [2024-04-23 21:19:02.070909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.939 [2024-04-23 21:19:02.070919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:44416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.939 [2024-04-23 21:19:02.070926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.939 [2024-04-23 21:19:02.070935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:44544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.939 [2024-04-23 21:19:02.070943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.939 [2024-04-23 21:19:02.070952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:44672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.939 [2024-04-23 21:19:02.070960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.939 [2024-04-23 21:19:02.070970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:44800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.939 [2024-04-23 21:19:02.070977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.939 [2024-04-23 21:19:02.070987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:44928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.939 [2024-04-23 21:19:02.070994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.939 [2024-04-23 21:19:02.071004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:45056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.939 [2024-04-23 21:19:02.071011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.939 [2024-04-23 21:19:02.071021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:45184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.939 [2024-04-23 21:19:02.071028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.939 [2024-04-23 21:19:02.071038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:45312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.939 [2024-04-23 21:19:02.071046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.939 [2024-04-23 21:19:02.071056] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:45440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.939 [2024-04-23 21:19:02.071065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.939 [2024-04-23 21:19:02.071207] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x614000007e40 was disconnected and freed. reset controller. 00:18:07.939 [2024-04-23 21:19:02.072119] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:07.939 task offset: 45568 on job bdev=Nvme0n1 fails 00:18:07.939 00:18:07.939 Latency(us) 00:18:07.939 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:07.939 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:07.939 Job: Nvme0n1 ended in about 0.28 seconds with error 00:18:07.939 Verification LBA range: start 0x0 length 0x400 00:18:07.939 Nvme0n1 : 0.28 1152.66 72.04 230.53 0.00 44997.63 2017.82 40563.33 00:18:07.939 =================================================================================================================== 00:18:07.939 Total : 1152.66 72.04 230.53 0.00 44997.63 2017.82 40563.33 00:18:07.939 21:19:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:07.939 [2024-04-23 21:19:02.074466] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:07.939 [2024-04-23 21:19:02.074498] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:18:07.939 21:19:02 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:18:07.939 21:19:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:07.939 21:19:02 -- common/autotest_common.sh@10 -- # set +x 00:18:07.939 21:19:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:07.939 21:19:02 -- target/host_management.sh@87 -- # sleep 1 00:18:07.939 [2024-04-23 21:19:02.208779] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
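The wall of ABORTED - SQ DELETION completions above is the point of this test, not a failure of it: with bdevperf mid-run, host0 is removed from cnode0's allowed host list, the target drops the qpair, every in-flight READ/WRITE is completed as aborted, and the initiator resets the controller; the reset only succeeds once the host is re-added ("Resetting controller successful" at the end). rpc_cmd in the trace is the harness wrapper around scripts/rpc.py, so the driving pair reduces to:

# Revoke, then restore, host0's access to cnode0 while IO is in flight
scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0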
00:18:08.882 21:19:03 -- target/host_management.sh@91 -- # kill -9 1421852 00:18:08.882 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1421852) - No such process 00:18:08.882 21:19:03 -- target/host_management.sh@91 -- # true 00:18:08.882 21:19:03 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:18:08.882 21:19:03 -- target/host_management.sh@100 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:18:08.882 21:19:03 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:18:08.882 21:19:03 -- nvmf/common.sh@521 -- # config=() 00:18:08.882 21:19:03 -- nvmf/common.sh@521 -- # local subsystem config 00:18:08.882 21:19:03 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:08.882 21:19:03 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:08.882 { 00:18:08.882 "params": { 00:18:08.882 "name": "Nvme$subsystem", 00:18:08.882 "trtype": "$TEST_TRANSPORT", 00:18:08.882 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:08.882 "adrfam": "ipv4", 00:18:08.882 "trsvcid": "$NVMF_PORT", 00:18:08.882 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:08.882 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:08.882 "hdgst": ${hdgst:-false}, 00:18:08.882 "ddgst": ${ddgst:-false} 00:18:08.882 }, 00:18:08.882 "method": "bdev_nvme_attach_controller" 00:18:08.882 } 00:18:08.882 EOF 00:18:08.882 )") 00:18:08.882 21:19:03 -- nvmf/common.sh@543 -- # cat 00:18:08.882 21:19:03 -- nvmf/common.sh@545 -- # jq . 00:18:08.882 21:19:03 -- nvmf/common.sh@546 -- # IFS=, 00:18:08.882 21:19:03 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:18:08.882 "params": { 00:18:08.882 "name": "Nvme0", 00:18:08.882 "trtype": "tcp", 00:18:08.882 "traddr": "10.0.0.2", 00:18:08.882 "adrfam": "ipv4", 00:18:08.882 "trsvcid": "4420", 00:18:08.882 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:08.882 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:18:08.882 "hdgst": false, 00:18:08.882 "ddgst": false 00:18:08.882 }, 00:18:08.882 "method": "bdev_nvme_attach_controller" 00:18:08.882 }' 00:18:09.140 [2024-04-23 21:19:03.172601] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 00:18:09.140 [2024-04-23 21:19:03.172763] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1422240 ] 00:18:09.140 EAL: No free 2048 kB hugepages reported on node 1 00:18:09.140 [2024-04-23 21:19:03.303467] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:09.140 [2024-04-23 21:19:03.395115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:09.399 Running I/O for 1 seconds... 
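The first bdevperf run is only disturbed after the waitforio gate (the read_io_count=259 check earlier) confirms IO is actually flowing: it polls iostat on the bdevperf rpc socket until the job has accumulated a minimum number of reads. A minimal sketch of that loop, with the rpc call and jq filter taken verbatim from the trace and the retry count and sleep treated as free parameters:

# Poll Nvme0n1 until at least 100 reads have completed, giving up after 10 tries
i=10
while (( i-- > 0 )); do
    ops=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 |
          jq -r '.bdevs[0].num_read_ops')
    (( ops >= 100 )) && break
    sleep 0.25
done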
00:18:10.776 00:18:10.776 Latency(us) 00:18:10.776 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:10.776 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:10.776 Verification LBA range: start 0x0 length 0x400 00:18:10.776 Nvme0n1 : 1.02 1312.44 82.03 0.00 0.00 48223.68 12072.42 37803.92 00:18:10.776 =================================================================================================================== 00:18:10.776 Total : 1312.44 82.03 0.00 0.00 48223.68 12072.42 37803.92 00:18:11.035 21:19:05 -- target/host_management.sh@102 -- # stoptarget 00:18:11.035 21:19:05 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:18:11.035 21:19:05 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:18:11.035 21:19:05 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:18:11.035 21:19:05 -- target/host_management.sh@40 -- # nvmftestfini 00:18:11.035 21:19:05 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:11.035 21:19:05 -- nvmf/common.sh@117 -- # sync 00:18:11.035 21:19:05 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:11.035 21:19:05 -- nvmf/common.sh@120 -- # set +e 00:18:11.035 21:19:05 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:11.035 21:19:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:11.035 rmmod nvme_tcp 00:18:11.035 rmmod nvme_fabrics 00:18:11.035 rmmod nvme_keyring 00:18:11.035 21:19:05 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:11.035 21:19:05 -- nvmf/common.sh@124 -- # set -e 00:18:11.035 21:19:05 -- nvmf/common.sh@125 -- # return 0 00:18:11.035 21:19:05 -- nvmf/common.sh@478 -- # '[' -n 1421737 ']' 00:18:11.035 21:19:05 -- nvmf/common.sh@479 -- # killprocess 1421737 00:18:11.035 21:19:05 -- common/autotest_common.sh@936 -- # '[' -z 1421737 ']' 00:18:11.035 21:19:05 -- common/autotest_common.sh@940 -- # kill -0 1421737 00:18:11.035 21:19:05 -- common/autotest_common.sh@941 -- # uname 00:18:11.035 21:19:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:11.035 21:19:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1421737 00:18:11.035 21:19:05 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:11.035 21:19:05 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:11.035 21:19:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1421737' 00:18:11.035 killing process with pid 1421737 00:18:11.035 21:19:05 -- common/autotest_common.sh@955 -- # kill 1421737 00:18:11.035 21:19:05 -- common/autotest_common.sh@960 -- # wait 1421737 00:18:11.601 [2024-04-23 21:19:05.643621] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:18:11.601 21:19:05 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:11.601 21:19:05 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:11.601 21:19:05 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:11.601 21:19:05 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:11.601 21:19:05 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:11.601 21:19:05 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:11.601 21:19:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:11.601 21:19:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:13.508 21:19:07 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:13.508 
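The nvmftestfini unwind traced above is the same for every target test in this log: flush buffers, unload the kernel initiator modules (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines), kill the nvmf_tgt app by its recorded pid, and clear the test interface address. Condensed, using this run's pid:

# nvmftestfini, condensed; 1421737 is the nvmf_tgt pid recorded above
sync
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
kill 1421737
wait 1421737        # works because the harness launched nvmf_tgt from this shell
ip -4 addr flush cvl_0_1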
00:18:13.508 real 0m7.493s 00:18:13.508 user 0m22.854s 00:18:13.508 sys 0m1.259s 00:18:13.508 21:19:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:13.508 21:19:07 -- common/autotest_common.sh@10 -- # set +x 00:18:13.508 ************************************ 00:18:13.508 END TEST nvmf_host_management 00:18:13.508 ************************************ 00:18:13.508 21:19:07 -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:18:13.508 00:18:13.508 real 0m13.436s 00:18:13.508 user 0m24.425s 00:18:13.508 sys 0m5.524s 00:18:13.509 21:19:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:13.509 21:19:07 -- common/autotest_common.sh@10 -- # set +x 00:18:13.509 ************************************ 00:18:13.509 END TEST nvmf_host_management 00:18:13.509 ************************************ 00:18:13.770 21:19:07 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:18:13.770 21:19:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:13.770 21:19:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:13.770 21:19:07 -- common/autotest_common.sh@10 -- # set +x 00:18:13.770 ************************************ 00:18:13.770 START TEST nvmf_lvol 00:18:13.770 ************************************ 00:18:13.770 21:19:07 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:18:13.770 * Looking for test storage... 00:18:13.770 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:18:13.770 21:19:07 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:18:13.770 21:19:07 -- nvmf/common.sh@7 -- # uname -s 00:18:13.770 21:19:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:13.770 21:19:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:13.770 21:19:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:13.770 21:19:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:13.770 21:19:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:13.770 21:19:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:13.770 21:19:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:13.770 21:19:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:13.770 21:19:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:13.770 21:19:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:13.770 21:19:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:18:13.770 21:19:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:18:13.770 21:19:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:13.770 21:19:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:13.770 21:19:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:18:13.770 21:19:07 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:13.770 21:19:07 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:18:13.770 21:19:07 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:13.770 21:19:07 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:13.770 21:19:07 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:13.770 21:19:07 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.770 21:19:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.770 21:19:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.770 21:19:07 -- paths/export.sh@5 -- # export PATH 00:18:13.770 21:19:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.770 21:19:07 -- nvmf/common.sh@47 -- # : 0 00:18:13.770 21:19:07 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:13.770 21:19:07 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:13.770 21:19:07 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:13.770 21:19:07 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:13.770 21:19:07 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:13.770 21:19:07 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:13.770 21:19:07 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:13.770 21:19:07 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:13.770 21:19:07 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:13.770 21:19:07 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:13.770 21:19:07 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:18:13.770 21:19:07 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:18:13.770 21:19:07 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:18:13.770 21:19:07 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:18:13.770 21:19:07 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:13.770 21:19:08 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:18:13.770 21:19:08 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:13.770 21:19:08 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:13.770 21:19:08 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:13.770 21:19:08 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:13.770 21:19:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:13.770 21:19:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:13.770 21:19:08 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:18:13.770 21:19:08 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:13.770 21:19:08 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:13.770 21:19:08 -- common/autotest_common.sh@10 -- # set +x 00:18:19.051 21:19:13 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:19.051 21:19:13 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:19.051 21:19:13 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:19.051 21:19:13 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:19.051 21:19:13 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:19.051 21:19:13 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:19.051 21:19:13 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:19.051 21:19:13 -- nvmf/common.sh@295 -- # net_devs=() 00:18:19.051 21:19:13 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:19.051 21:19:13 -- nvmf/common.sh@296 -- # e810=() 00:18:19.051 21:19:13 -- nvmf/common.sh@296 -- # local -ga e810 00:18:19.051 21:19:13 -- nvmf/common.sh@297 -- # x722=() 00:18:19.051 21:19:13 -- nvmf/common.sh@297 -- # local -ga x722 00:18:19.051 21:19:13 -- nvmf/common.sh@298 -- # mlx=() 00:18:19.051 21:19:13 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:19.051 21:19:13 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:19.051 21:19:13 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:19.051 21:19:13 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:19.051 21:19:13 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:19.051 21:19:13 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:19.051 21:19:13 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:19.051 21:19:13 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:19.051 21:19:13 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:19.051 21:19:13 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:19.051 21:19:13 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:19.051 21:19:13 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:19.051 21:19:13 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:19.051 21:19:13 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:19.051 21:19:13 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:18:19.051 21:19:13 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:18:19.051 21:19:13 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:18:19.051 21:19:13 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:19.051 21:19:13 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:19.051 21:19:13 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:18:19.051 Found 0000:27:00.0 (0x8086 - 0x159b) 00:18:19.051 21:19:13 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:19.051 21:19:13 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:19.051 21:19:13 -- nvmf/common.sh@350 
-- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:19.051 21:19:13 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:19.051 21:19:13 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:19.051 21:19:13 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:19.051 21:19:13 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:18:19.051 Found 0000:27:00.1 (0x8086 - 0x159b) 00:18:19.051 21:19:13 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:19.051 21:19:13 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:19.051 21:19:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:19.051 21:19:13 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:19.051 21:19:13 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:19.051 21:19:13 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:19.051 21:19:13 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:18:19.051 21:19:13 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:19.051 21:19:13 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:19.051 21:19:13 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:19.051 21:19:13 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:19.051 21:19:13 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:18:19.051 Found net devices under 0000:27:00.0: cvl_0_0 00:18:19.051 21:19:13 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:19.051 21:19:13 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:19.051 21:19:13 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:19.051 21:19:13 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:19.051 21:19:13 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:19.051 21:19:13 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:18:19.051 Found net devices under 0000:27:00.1: cvl_0_1 00:18:19.051 21:19:13 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:19.051 21:19:13 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:19.051 21:19:13 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:19.051 21:19:13 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:19.051 21:19:13 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:18:19.051 21:19:13 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:18:19.051 21:19:13 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:19.051 21:19:13 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:19.051 21:19:13 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:19.051 21:19:13 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:19.051 21:19:13 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:19.051 21:19:13 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:19.051 21:19:13 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:19.051 21:19:13 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:19.051 21:19:13 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:19.051 21:19:13 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:19.051 21:19:13 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:19.051 21:19:13 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:19.051 21:19:13 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:19.051 21:19:13 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:19.051 21:19:13 -- nvmf/common.sh@255 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:19.051 21:19:13 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:19.051 21:19:13 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:19.051 21:19:13 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:19.051 21:19:13 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:19.309 21:19:13 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:19.309 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:19.309 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.315 ms 00:18:19.309 00:18:19.309 --- 10.0.0.2 ping statistics --- 00:18:19.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:19.309 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:18:19.309 21:19:13 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:19.309 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:19.309 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:18:19.309 00:18:19.309 --- 10.0.0.1 ping statistics --- 00:18:19.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:19.309 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:18:19.309 21:19:13 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:19.309 21:19:13 -- nvmf/common.sh@411 -- # return 0 00:18:19.309 21:19:13 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:19.309 21:19:13 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:19.310 21:19:13 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:19.310 21:19:13 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:19.310 21:19:13 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:19.310 21:19:13 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:19.310 21:19:13 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:19.310 21:19:13 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:18:19.310 21:19:13 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:19.310 21:19:13 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:19.310 21:19:13 -- common/autotest_common.sh@10 -- # set +x 00:18:19.310 21:19:13 -- nvmf/common.sh@470 -- # nvmfpid=1426650 00:18:19.310 21:19:13 -- nvmf/common.sh@471 -- # waitforlisten 1426650 00:18:19.310 21:19:13 -- common/autotest_common.sh@817 -- # '[' -z 1426650 ']' 00:18:19.310 21:19:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:19.310 21:19:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:19.310 21:19:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:19.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:19.310 21:19:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:19.310 21:19:13 -- common/autotest_common.sh@10 -- # set +x 00:18:19.310 21:19:13 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:18:19.310 [2024-04-23 21:19:13.439002] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 
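A note on the fallback TCP topology that nvmf_tcp_init set up just above, since every 10.0.0.x address later in the log follows from it: with no second host available, the target-side CVL port (cvl_0_0) is moved into a private network namespace while its sibling (cvl_0_1) stays in the root namespace, so initiator traffic to 10.0.0.2 leaves one port and arrives on the other (the rig presumably has the two ports cabled back to back; the two pings verify each direction). The sequence, extracted from the trace:

# Target port lives in its own namespace; initiator port stays in the root ns
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # open the NVMe/TCP port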
00:18:19.310 [2024-04-23 21:19:13.439099] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:19.310 EAL: No free 2048 kB hugepages reported on node 1 00:18:19.310 [2024-04-23 21:19:13.557836] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:19.601 [2024-04-23 21:19:13.655439] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:19.601 [2024-04-23 21:19:13.655475] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:19.601 [2024-04-23 21:19:13.655485] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:19.601 [2024-04-23 21:19:13.655494] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:19.601 [2024-04-23 21:19:13.655501] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:19.601 [2024-04-23 21:19:13.655574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:19.601 [2024-04-23 21:19:13.655673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:19.601 [2024-04-23 21:19:13.655699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:19.859 21:19:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:20.121 21:19:14 -- common/autotest_common.sh@850 -- # return 0 00:18:20.121 21:19:14 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:20.121 21:19:14 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:20.121 21:19:14 -- common/autotest_common.sh@10 -- # set +x 00:18:20.121 21:19:14 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:20.121 21:19:14 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:20.121 [2024-04-23 21:19:14.286120] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:20.121 21:19:14 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:20.382 21:19:14 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:18:20.382 21:19:14 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:20.640 21:19:14 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:18:20.640 21:19:14 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:18:20.640 21:19:14 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:18:20.897 21:19:14 -- target/nvmf_lvol.sh@29 -- # lvs=7e0ad799-00df-4d35-a3d3-ba5f598a0393 00:18:20.897 21:19:14 -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 7e0ad799-00df-4d35-a3d3-ba5f598a0393 lvol 20 00:18:20.897 21:19:15 -- target/nvmf_lvol.sh@32 -- # lvol=d48e3a52-e155-4886-a813-ae22cd0291d3 00:18:20.897 21:19:15 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:18:21.154 21:19:15 -- target/nvmf_lvol.sh@36 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d48e3a52-e155-4886-a813-ae22cd0291d3 00:18:21.154 21:19:15 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:21.411 [2024-04-23 21:19:15.514775] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:21.411 21:19:15 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:21.411 21:19:15 -- target/nvmf_lvol.sh@42 -- # perf_pid=1427262 00:18:21.411 21:19:15 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:18:21.411 21:19:15 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:18:21.671 EAL: No free 2048 kB hugepages reported on node 1 00:18:22.611 21:19:16 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot d48e3a52-e155-4886-a813-ae22cd0291d3 MY_SNAPSHOT 00:18:22.611 21:19:16 -- target/nvmf_lvol.sh@47 -- # snapshot=297a891b-92f3-4527-a03c-4383c3fe4e10 00:18:22.611 21:19:16 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize d48e3a52-e155-4886-a813-ae22cd0291d3 30 00:18:22.869 21:19:17 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 297a891b-92f3-4527-a03c-4383c3fe4e10 MY_CLONE 00:18:23.126 21:19:17 -- target/nvmf_lvol.sh@49 -- # clone=718e27d6-3eac-4268-aab9-211ad649b4e1 00:18:23.126 21:19:17 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 718e27d6-3eac-4268-aab9-211ad649b4e1 00:18:23.385 21:19:17 -- target/nvmf_lvol.sh@53 -- # wait 1427262 00:18:33.368 Initializing NVMe Controllers 00:18:33.368 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:18:33.368 Controller IO queue size 128, less than required. 00:18:33.368 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:33.368 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:18:33.368 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:18:33.368 Initialization complete. Launching workers. 
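
While spdk_nvme_perf drives random 4 KiB writes at the exported lvol, the test performs its logical-volume operations against a device under live I/O: snapshot the lvol, resize the origin from 20 to 30 (MiB, per the rpc.py convention), clone the read-only snapshot, and inflate the clone so it owns every cluster instead of sharing them with the snapshot. Reduced to the four RPCs, with $rpc standing in for the full scripts/rpc.py path used above and UUIDs as this run printed them:

    # Snapshot the lvol mid-I/O, then grow the origin volume.
    snap=$($rpc bdev_lvol_snapshot d48e3a52-e155-4886-a813-ae22cd0291d3 MY_SNAPSHOT)
    $rpc bdev_lvol_resize d48e3a52-e155-4886-a813-ae22cd0291d3 30
    # Clone the snapshot, then inflate the clone to a fully allocated volume.
    clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
    $rpc bdev_lvol_inflate "$clone"
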
00:18:33.368 ======================================================== 00:18:33.368 Latency(us) 00:18:33.368 Device Information : IOPS MiB/s Average min max 00:18:33.368 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12542.96 49.00 10206.77 413.66 96041.28 00:18:33.368 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12368.77 48.32 10350.68 2041.31 101172.18 00:18:33.368 ======================================================== 00:18:33.368 Total : 24911.73 97.31 10278.22 413.66 101172.18 00:18:33.368 00:18:33.368 21:19:26 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:33.368 21:19:26 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d48e3a52-e155-4886-a813-ae22cd0291d3 00:18:33.368 21:19:26 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7e0ad799-00df-4d35-a3d3-ba5f598a0393 00:18:33.368 21:19:26 -- target/nvmf_lvol.sh@60 -- # rm -f 00:18:33.368 21:19:26 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:18:33.368 21:19:26 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:18:33.368 21:19:26 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:33.368 21:19:26 -- nvmf/common.sh@117 -- # sync 00:18:33.368 21:19:26 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:33.368 21:19:26 -- nvmf/common.sh@120 -- # set +e 00:18:33.368 21:19:26 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:33.368 21:19:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:33.368 rmmod nvme_tcp 00:18:33.368 rmmod nvme_fabrics 00:18:33.368 rmmod nvme_keyring 00:18:33.368 21:19:26 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:33.368 21:19:26 -- nvmf/common.sh@124 -- # set -e 00:18:33.368 21:19:26 -- nvmf/common.sh@125 -- # return 0 00:18:33.368 21:19:26 -- nvmf/common.sh@478 -- # '[' -n 1426650 ']' 00:18:33.368 21:19:26 -- nvmf/common.sh@479 -- # killprocess 1426650 00:18:33.368 21:19:26 -- common/autotest_common.sh@936 -- # '[' -z 1426650 ']' 00:18:33.368 21:19:26 -- common/autotest_common.sh@940 -- # kill -0 1426650 00:18:33.368 21:19:26 -- common/autotest_common.sh@941 -- # uname 00:18:33.368 21:19:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:33.368 21:19:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1426650 00:18:33.368 21:19:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:33.368 21:19:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:33.368 21:19:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1426650' 00:18:33.368 killing process with pid 1426650 00:18:33.368 21:19:26 -- common/autotest_common.sh@955 -- # kill 1426650 00:18:33.368 21:19:26 -- common/autotest_common.sh@960 -- # wait 1426650 00:18:33.368 21:19:27 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:33.368 21:19:27 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:33.368 21:19:27 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:33.368 21:19:27 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:33.368 21:19:27 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:33.368 21:19:27 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:33.368 21:19:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:33.368 21:19:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:35.273 
21:19:29 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:35.273 00:18:35.273 real 0m21.296s 00:18:35.273 user 1m2.382s 00:18:35.273 sys 0m6.174s 00:18:35.273 21:19:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:35.273 21:19:29 -- common/autotest_common.sh@10 -- # set +x 00:18:35.273 ************************************ 00:18:35.273 END TEST nvmf_lvol 00:18:35.273 ************************************ 00:18:35.273 21:19:29 -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:18:35.273 21:19:29 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:35.273 21:19:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:35.273 21:19:29 -- common/autotest_common.sh@10 -- # set +x 00:18:35.273 ************************************ 00:18:35.273 START TEST nvmf_lvs_grow 00:18:35.273 ************************************ 00:18:35.273 21:19:29 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:18:35.273 * Looking for test storage... 00:18:35.273 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:18:35.273 21:19:29 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:18:35.273 21:19:29 -- nvmf/common.sh@7 -- # uname -s 00:18:35.273 21:19:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:35.273 21:19:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:35.273 21:19:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:35.273 21:19:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:35.273 21:19:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:35.273 21:19:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:35.273 21:19:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:35.273 21:19:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:35.273 21:19:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:35.273 21:19:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:35.273 21:19:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:18:35.273 21:19:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:18:35.273 21:19:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:35.273 21:19:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:35.273 21:19:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:18:35.273 21:19:29 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:35.273 21:19:29 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:18:35.273 21:19:29 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:35.273 21:19:29 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:35.273 21:19:29 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:35.273 21:19:29 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[repeated /opt/golangci, /opt/protoc and /opt/go segments from earlier sourcing elided]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.273 21:19:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[duplicated segments elided]:/var/lib/snapd/snap/bin 00:18:35.273 21:19:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[duplicated segments elided]:/var/lib/snapd/snap/bin 00:18:35.274 21:19:29 -- paths/export.sh@5 -- # export PATH 00:18:35.274 21:19:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[duplicated segments elided]:/var/lib/snapd/snap/bin 00:18:35.274 21:19:29 -- nvmf/common.sh@47 -- # : 0 00:18:35.274 21:19:29 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:35.274 21:19:29 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:35.274 21:19:29 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:35.274 21:19:29 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:35.274 21:19:29 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:35.274 21:19:29 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:35.274 21:19:29 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:35.274 21:19:29 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:35.274 21:19:29 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:18:35.274 21:19:29 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:35.274 21:19:29 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:18:35.274 21:19:29 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:35.274 21:19:29 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:35.274 21:19:29 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:35.274 21:19:29 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:35.274 21:19:29 -- nvmf/common.sh@401 -- #
remove_spdk_ns 00:18:35.274 21:19:29 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:35.274 21:19:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:35.274 21:19:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:35.274 21:19:29 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:18:35.274 21:19:29 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:35.274 21:19:29 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:35.274 21:19:29 -- common/autotest_common.sh@10 -- # set +x 00:18:40.551 21:19:34 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:40.551 21:19:34 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:40.551 21:19:34 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:40.551 21:19:34 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:40.551 21:19:34 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:40.551 21:19:34 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:40.551 21:19:34 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:40.551 21:19:34 -- nvmf/common.sh@295 -- # net_devs=() 00:18:40.551 21:19:34 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:40.551 21:19:34 -- nvmf/common.sh@296 -- # e810=() 00:18:40.551 21:19:34 -- nvmf/common.sh@296 -- # local -ga e810 00:18:40.551 21:19:34 -- nvmf/common.sh@297 -- # x722=() 00:18:40.551 21:19:34 -- nvmf/common.sh@297 -- # local -ga x722 00:18:40.551 21:19:34 -- nvmf/common.sh@298 -- # mlx=() 00:18:40.551 21:19:34 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:40.551 21:19:34 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:40.551 21:19:34 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:40.551 21:19:34 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:40.551 21:19:34 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:40.551 21:19:34 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:40.551 21:19:34 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:40.551 21:19:34 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:40.551 21:19:34 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:40.551 21:19:34 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:40.551 21:19:34 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:40.551 21:19:34 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:40.551 21:19:34 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:40.551 21:19:34 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:40.551 21:19:34 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:18:40.551 21:19:34 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:18:40.551 21:19:34 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:18:40.551 21:19:34 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:40.551 21:19:34 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:40.551 21:19:34 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:18:40.551 Found 0000:27:00.0 (0x8086 - 0x159b) 00:18:40.551 21:19:34 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:40.551 21:19:34 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:40.551 21:19:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:40.551 21:19:34 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:40.551 21:19:34 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:40.551 
21:19:34 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:40.551 21:19:34 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:18:40.551 Found 0000:27:00.1 (0x8086 - 0x159b) 00:18:40.551 21:19:34 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:40.551 21:19:34 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:40.551 21:19:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:40.551 21:19:34 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:40.551 21:19:34 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:40.551 21:19:34 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:40.551 21:19:34 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:18:40.551 21:19:34 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:40.551 21:19:34 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:40.551 21:19:34 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:40.551 21:19:34 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:40.551 21:19:34 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:18:40.551 Found net devices under 0000:27:00.0: cvl_0_0 00:18:40.551 21:19:34 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:40.551 21:19:34 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:40.551 21:19:34 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:40.551 21:19:34 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:40.551 21:19:34 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:40.551 21:19:34 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:18:40.551 Found net devices under 0000:27:00.1: cvl_0_1 00:18:40.551 21:19:34 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:40.551 21:19:34 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:40.551 21:19:34 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:40.551 21:19:34 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:40.551 21:19:34 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:18:40.551 21:19:34 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:18:40.551 21:19:34 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:40.551 21:19:34 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:40.551 21:19:34 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:40.551 21:19:34 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:40.551 21:19:34 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:40.551 21:19:34 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:40.551 21:19:34 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:40.551 21:19:34 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:40.551 21:19:34 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:40.551 21:19:34 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:40.551 21:19:34 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:40.551 21:19:34 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:40.551 21:19:34 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:40.551 21:19:34 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:40.551 21:19:34 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:40.551 21:19:34 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:40.551 21:19:34 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set cvl_0_0 up 00:18:40.551 21:19:34 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:40.551 21:19:34 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:40.812 21:19:34 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:40.813 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:40.813 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.403 ms 00:18:40.813 00:18:40.813 --- 10.0.0.2 ping statistics --- 00:18:40.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:40.813 rtt min/avg/max/mdev = 0.403/0.403/0.403/0.000 ms 00:18:40.813 21:19:34 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:40.813 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:40.813 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:18:40.813 00:18:40.813 --- 10.0.0.1 ping statistics --- 00:18:40.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:40.813 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:18:40.813 21:19:34 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:40.813 21:19:34 -- nvmf/common.sh@411 -- # return 0 00:18:40.813 21:19:34 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:40.813 21:19:34 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:40.813 21:19:34 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:40.813 21:19:34 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:40.813 21:19:34 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:40.813 21:19:34 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:40.813 21:19:34 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:40.813 21:19:34 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:18:40.813 21:19:34 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:40.813 21:19:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:40.813 21:19:34 -- common/autotest_common.sh@10 -- # set +x 00:18:40.813 21:19:34 -- nvmf/common.sh@470 -- # nvmfpid=1433218 00:18:40.813 21:19:34 -- nvmf/common.sh@471 -- # waitforlisten 1433218 00:18:40.813 21:19:34 -- common/autotest_common.sh@817 -- # '[' -z 1433218 ']' 00:18:40.813 21:19:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:40.813 21:19:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:40.813 21:19:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:40.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:40.813 21:19:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:40.813 21:19:34 -- common/autotest_common.sh@10 -- # set +x 00:18:40.813 21:19:34 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:40.813 [2024-04-23 21:19:34.983220] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 
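
The device discovery a few lines up is how prepare_net_devs arrived at is_hw=yes: it matches known Intel/Mellanox device IDs on the PCI bus (0x159b is an E810 function, hence the "ice" driver) and resolves each matching function to its kernel interface through sysfs, which is how 0000:27:00.0 and 0000:27:00.1 become cvl_0_0 and cvl_0_1. Stripped of the ID tables, the lookup is a short sysfs walk, sketched here for the two functions this host reports:

    # A PCI function's netdev names live under /sys/bus/pci/devices/<bdf>/net/.
    for pci in 0000:27:00.0 0000:27:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the sysfs path, keep interface names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done
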
00:18:40.813 [2024-04-23 21:19:34.983352] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:40.813 EAL: No free 2048 kB hugepages reported on node 1 00:18:41.075 [2024-04-23 21:19:35.126435] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:41.075 [2024-04-23 21:19:35.237853] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:41.075 [2024-04-23 21:19:35.237893] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:41.075 [2024-04-23 21:19:35.237903] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:41.075 [2024-04-23 21:19:35.237913] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:41.075 [2024-04-23 21:19:35.237920] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:41.075 [2024-04-23 21:19:35.237947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:41.645 21:19:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:41.645 21:19:35 -- common/autotest_common.sh@850 -- # return 0 00:18:41.645 21:19:35 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:41.645 21:19:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:41.645 21:19:35 -- common/autotest_common.sh@10 -- # set +x 00:18:41.645 21:19:35 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:41.645 21:19:35 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:41.645 [2024-04-23 21:19:35.871599] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:41.645 21:19:35 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:18:41.646 21:19:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:41.646 21:19:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:41.646 21:19:35 -- common/autotest_common.sh@10 -- # set +x 00:18:41.906 ************************************ 00:18:41.906 START TEST lvs_grow_clean 00:18:41.906 ************************************ 00:18:41.906 21:19:35 -- common/autotest_common.sh@1111 -- # lvs_grow 00:18:41.906 21:19:35 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:18:41.906 21:19:35 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:18:41.906 21:19:35 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:18:41.906 21:19:35 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:18:41.906 21:19:35 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:18:41.906 21:19:35 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:18:41.906 21:19:35 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:41.906 21:19:35 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:41.906 21:19:36 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:41.906 21:19:36 -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:18:41.907 21:19:36 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:18:42.167 21:19:36 -- target/nvmf_lvs_grow.sh@28 -- # lvs=8ba0fd13-62da-48c0-bee0-357ab39c59ee 00:18:42.167 21:19:36 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:18:42.167 21:19:36 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8ba0fd13-62da-48c0-bee0-357ab39c59ee 00:18:42.430 21:19:36 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:18:42.430 21:19:36 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:18:42.430 21:19:36 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8ba0fd13-62da-48c0-bee0-357ab39c59ee lvol 150 00:18:42.430 21:19:36 -- target/nvmf_lvs_grow.sh@33 -- # lvol=ad8c3075-6cf5-4d6a-a82e-e758fb876eff 00:18:42.430 21:19:36 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:42.430 21:19:36 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:18:42.711 [2024-04-23 21:19:36.727490] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:18:42.711 [2024-04-23 21:19:36.727563] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:18:42.711 true 00:18:42.711 21:19:36 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8ba0fd13-62da-48c0-bee0-357ab39c59ee 00:18:42.711 21:19:36 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:18:42.711 21:19:36 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:18:42.711 21:19:36 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:18:42.990 21:19:37 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ad8c3075-6cf5-4d6a-a82e-e758fb876eff 00:18:42.990 21:19:37 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:43.249 [2024-04-23 21:19:37.263899] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:43.249 21:19:37 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:43.249 21:19:37 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1433856 00:18:43.249 21:19:37 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:43.249 21:19:37 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1433856 /var/tmp/bdevperf.sock 00:18:43.249 21:19:37 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:18:43.249 21:19:37 -- common/autotest_common.sh@817 -- # '[' -z 1433856 ']' 
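
The cluster counts above are consistent: the backing file is 200 MiB and the lvstore was created with --cluster-sz 4194304 (4 MiB), giving 50 clusters, of which one goes to lvstore metadata, hence total_data_clusters=49. Truncating the file to 400 MiB and calling bdev_aio_rescan doubles the bdev (51200 to 102400 4 KiB blocks) but does not grow the lvstore by itself; that is what the explicit bdev_lvol_grow_lvstore call later in this run is for, after which the same jq query reports 99 data clusters. The growth path as this test exercises it, with $rpc again standing in for the full scripts/rpc.py path:

    truncate -s 400M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev
    $rpc bdev_aio_rescan aio_bdev     # the aio bdev picks up the new file size
    $rpc bdev_lvol_grow_lvstore -u 8ba0fd13-62da-48c0-bee0-357ab39c59ee
    # total_data_clusters: 49 -> 99 (400 MiB / 4 MiB = 100 clusters, 1 kept for metadata)
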
00:18:43.249 21:19:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:43.249 21:19:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:43.249 21:19:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:43.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:43.249 21:19:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:43.249 21:19:37 -- common/autotest_common.sh@10 -- # set +x 00:18:43.249 [2024-04-23 21:19:37.489586] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 00:18:43.249 [2024-04-23 21:19:37.489704] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1433856 ] 00:18:43.507 EAL: No free 2048 kB hugepages reported on node 1 00:18:43.507 [2024-04-23 21:19:37.602366] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:43.507 [2024-04-23 21:19:37.690607] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:44.076 21:19:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:44.076 21:19:38 -- common/autotest_common.sh@850 -- # return 0 00:18:44.076 21:19:38 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:18:44.337 Nvme0n1 00:18:44.337 21:19:38 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:18:44.337 [ 00:18:44.337 { 00:18:44.337 "name": "Nvme0n1", 00:18:44.337 "aliases": [ 00:18:44.337 "ad8c3075-6cf5-4d6a-a82e-e758fb876eff" 00:18:44.337 ], 00:18:44.337 "product_name": "NVMe disk", 00:18:44.337 "block_size": 4096, 00:18:44.337 "num_blocks": 38912, 00:18:44.337 "uuid": "ad8c3075-6cf5-4d6a-a82e-e758fb876eff", 00:18:44.337 "assigned_rate_limits": { 00:18:44.337 "rw_ios_per_sec": 0, 00:18:44.337 "rw_mbytes_per_sec": 0, 00:18:44.337 "r_mbytes_per_sec": 0, 00:18:44.337 "w_mbytes_per_sec": 0 00:18:44.337 }, 00:18:44.337 "claimed": false, 00:18:44.337 "zoned": false, 00:18:44.337 "supported_io_types": { 00:18:44.337 "read": true, 00:18:44.337 "write": true, 00:18:44.337 "unmap": true, 00:18:44.337 "write_zeroes": true, 00:18:44.337 "flush": true, 00:18:44.337 "reset": true, 00:18:44.337 "compare": true, 00:18:44.337 "compare_and_write": true, 00:18:44.337 "abort": true, 00:18:44.337 "nvme_admin": true, 00:18:44.337 "nvme_io": true 00:18:44.337 }, 00:18:44.337 "memory_domains": [ 00:18:44.337 { 00:18:44.337 "dma_device_id": "system", 00:18:44.337 "dma_device_type": 1 00:18:44.337 } 00:18:44.337 ], 00:18:44.337 "driver_specific": { 00:18:44.337 "nvme": [ 00:18:44.337 { 00:18:44.337 "trid": { 00:18:44.337 "trtype": "TCP", 00:18:44.337 "adrfam": "IPv4", 00:18:44.337 "traddr": "10.0.0.2", 00:18:44.337 "trsvcid": "4420", 00:18:44.337 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:18:44.337 }, 00:18:44.337 "ctrlr_data": { 00:18:44.337 "cntlid": 1, 00:18:44.337 "vendor_id": "0x8086", 00:18:44.337 "model_number": "SPDK bdev Controller", 00:18:44.337 "serial_number": "SPDK0", 00:18:44.337 "firmware_revision": "24.05", 00:18:44.337 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:44.337 "oacs": { 
00:18:44.337 "security": 0, 00:18:44.337 "format": 0, 00:18:44.337 "firmware": 0, 00:18:44.337 "ns_manage": 0 00:18:44.337 }, 00:18:44.337 "multi_ctrlr": true, 00:18:44.337 "ana_reporting": false 00:18:44.337 }, 00:18:44.337 "vs": { 00:18:44.337 "nvme_version": "1.3" 00:18:44.337 }, 00:18:44.337 "ns_data": { 00:18:44.337 "id": 1, 00:18:44.337 "can_share": true 00:18:44.337 } 00:18:44.337 } 00:18:44.337 ], 00:18:44.337 "mp_policy": "active_passive" 00:18:44.337 } 00:18:44.337 } 00:18:44.337 ] 00:18:44.337 21:19:38 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1434090 00:18:44.337 21:19:38 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:18:44.337 21:19:38 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:44.600 Running I/O for 10 seconds... 00:18:45.540 Latency(us) 00:18:45.540 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:45.540 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:45.540 Nvme0n1 : 1.00 22225.00 86.82 0.00 0.00 0.00 0.00 0.00 00:18:45.540 =================================================================================================================== 00:18:45.540 Total : 22225.00 86.82 0.00 0.00 0.00 0.00 0.00 00:18:45.540 00:18:46.480 21:19:40 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 8ba0fd13-62da-48c0-bee0-357ab39c59ee 00:18:46.480 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:46.480 Nvme0n1 : 2.00 22204.50 86.74 0.00 0.00 0.00 0.00 0.00 00:18:46.480 =================================================================================================================== 00:18:46.480 Total : 22204.50 86.74 0.00 0.00 0.00 0.00 0.00 00:18:46.480 00:18:46.480 true 00:18:46.480 21:19:40 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8ba0fd13-62da-48c0-bee0-357ab39c59ee 00:18:46.480 21:19:40 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:18:46.742 21:19:40 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:18:46.742 21:19:40 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:18:46.742 21:19:40 -- target/nvmf_lvs_grow.sh@65 -- # wait 1434090 00:18:47.679 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:47.679 Nvme0n1 : 3.00 22261.67 86.96 0.00 0.00 0.00 0.00 0.00 00:18:47.679 =================================================================================================================== 00:18:47.679 Total : 22261.67 86.96 0.00 0.00 0.00 0.00 0.00 00:18:47.679 00:18:48.621 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:48.621 Nvme0n1 : 4.00 22344.25 87.28 0.00 0.00 0.00 0.00 0.00 00:18:48.621 =================================================================================================================== 00:18:48.621 Total : 22344.25 87.28 0.00 0.00 0.00 0.00 0.00 00:18:48.621 00:18:49.554 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:49.554 Nvme0n1 : 5.00 22254.60 86.93 0.00 0.00 0.00 0.00 0.00 00:18:49.554 =================================================================================================================== 00:18:49.554 Total : 22254.60 86.93 0.00 0.00 0.00 0.00 0.00 00:18:49.554 00:18:50.494 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 
4096) 00:18:50.495 Nvme0n1 : 6.00 22285.50 87.05 0.00 0.00 0.00 0.00 0.00 00:18:50.495 =================================================================================================================== 00:18:50.495 Total : 22285.50 87.05 0.00 0.00 0.00 0.00 0.00 00:18:50.495 00:18:51.431 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:51.431 Nvme0n1 : 7.00 22329.29 87.22 0.00 0.00 0.00 0.00 0.00 00:18:51.431 =================================================================================================================== 00:18:51.431 Total : 22329.29 87.22 0.00 0.00 0.00 0.00 0.00 00:18:51.431 00:18:52.373 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:52.373 Nvme0n1 : 8.00 22346.12 87.29 0.00 0.00 0.00 0.00 0.00 00:18:52.373 =================================================================================================================== 00:18:52.373 Total : 22346.12 87.29 0.00 0.00 0.00 0.00 0.00 00:18:52.373 00:18:53.753 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:53.754 Nvme0n1 : 9.00 22362.78 87.35 0.00 0.00 0.00 0.00 0.00 00:18:53.754 =================================================================================================================== 00:18:53.754 Total : 22362.78 87.35 0.00 0.00 0.00 0.00 0.00 00:18:53.754 00:18:54.692 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:54.692 Nvme0n1 : 10.00 22376.10 87.41 0.00 0.00 0.00 0.00 0.00 00:18:54.692 =================================================================================================================== 00:18:54.692 Total : 22376.10 87.41 0.00 0.00 0.00 0.00 0.00 00:18:54.692 00:18:54.692 00:18:54.692 Latency(us) 00:18:54.692 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:54.692 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:54.692 Nvme0n1 : 10.01 22375.92 87.41 0.00 0.00 5716.39 3725.20 10071.85 00:18:54.692 =================================================================================================================== 00:18:54.692 Total : 22375.92 87.41 0.00 0.00 5716.39 3725.20 10071.85 00:18:54.692 0 00:18:54.692 21:19:48 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1433856 00:18:54.692 21:19:48 -- common/autotest_common.sh@936 -- # '[' -z 1433856 ']' 00:18:54.692 21:19:48 -- common/autotest_common.sh@940 -- # kill -0 1433856 00:18:54.692 21:19:48 -- common/autotest_common.sh@941 -- # uname 00:18:54.692 21:19:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:54.693 21:19:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1433856 00:18:54.693 21:19:48 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:54.693 21:19:48 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:54.693 21:19:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1433856' 00:18:54.693 killing process with pid 1433856 00:18:54.693 21:19:48 -- common/autotest_common.sh@955 -- # kill 1433856 00:18:54.693 Received shutdown signal, test time was about 10.000000 seconds 00:18:54.693 00:18:54.693 Latency(us) 00:18:54.693 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:54.693 =================================================================================================================== 00:18:54.693 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:54.693 21:19:48 -- common/autotest_common.sh@960 -- # wait 1433856 
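
The verification that follows is plain arithmetic: the grown lvstore holds 99 data clusters, and the 150 MiB thick-provisioned lvol occupies ceil(150 / 4) = 38 of them, so free_clusters must come back as 99 - 38 = 61. The test then deletes aio_bdev out from under the lvstore to confirm hot-remove closes it cleanly (bdev_lvol_get_lvstores fails with -19, "No such device") and recreates the aio bdev so the lvstore is loaded back from its on-disk metadata. The free-cluster check, sketched with the same jq filter the test uses:

    free=$($rpc bdev_lvol_get_lvstores -u 8ba0fd13-62da-48c0-bee0-357ab39c59ee \
           | jq -r '.[0].free_clusters')
    [[ $free == 61 ]]    # 99 data clusters - 38 allocated by the 150 MiB lvol
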
00:18:54.952 21:19:49 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:55.212 21:19:49 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8ba0fd13-62da-48c0-bee0-357ab39c59ee 00:18:55.212 21:19:49 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:18:55.212 21:19:49 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:18:55.212 21:19:49 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:18:55.212 21:19:49 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:55.212 [2024-04-23 21:19:49.483246] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:18:55.473 21:19:49 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8ba0fd13-62da-48c0-bee0-357ab39c59ee 00:18:55.473 21:19:49 -- common/autotest_common.sh@638 -- # local es=0 00:18:55.473 21:19:49 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8ba0fd13-62da-48c0-bee0-357ab39c59ee 00:18:55.473 21:19:49 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:18:55.473 21:19:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:55.473 21:19:49 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:18:55.473 21:19:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:55.473 21:19:49 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:18:55.473 21:19:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:55.473 21:19:49 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:18:55.473 21:19:49 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py ]] 00:18:55.473 21:19:49 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8ba0fd13-62da-48c0-bee0-357ab39c59ee 00:18:55.473 request: 00:18:55.473 { 00:18:55.473 "uuid": "8ba0fd13-62da-48c0-bee0-357ab39c59ee", 00:18:55.473 "method": "bdev_lvol_get_lvstores", 00:18:55.473 "req_id": 1 00:18:55.473 } 00:18:55.473 Got JSON-RPC error response 00:18:55.473 response: 00:18:55.473 { 00:18:55.473 "code": -19, 00:18:55.473 "message": "No such device" 00:18:55.473 } 00:18:55.473 21:19:49 -- common/autotest_common.sh@641 -- # es=1 00:18:55.473 21:19:49 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:55.473 21:19:49 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:55.473 21:19:49 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:55.473 21:19:49 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:55.734 aio_bdev 00:18:55.734 21:19:49 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev ad8c3075-6cf5-4d6a-a82e-e758fb876eff 00:18:55.734 21:19:49 -- common/autotest_common.sh@885 -- # local bdev_name=ad8c3075-6cf5-4d6a-a82e-e758fb876eff 00:18:55.734 21:19:49 -- 
common/autotest_common.sh@886 -- # local bdev_timeout= 00:18:55.734 21:19:49 -- common/autotest_common.sh@887 -- # local i 00:18:55.734 21:19:49 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:18:55.734 21:19:49 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:18:55.734 21:19:49 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:55.734 21:19:49 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b ad8c3075-6cf5-4d6a-a82e-e758fb876eff -t 2000 00:18:55.996 [ 00:18:55.996 { 00:18:55.996 "name": "ad8c3075-6cf5-4d6a-a82e-e758fb876eff", 00:18:55.996 "aliases": [ 00:18:55.996 "lvs/lvol" 00:18:55.996 ], 00:18:55.996 "product_name": "Logical Volume", 00:18:55.996 "block_size": 4096, 00:18:55.996 "num_blocks": 38912, 00:18:55.996 "uuid": "ad8c3075-6cf5-4d6a-a82e-e758fb876eff", 00:18:55.996 "assigned_rate_limits": { 00:18:55.996 "rw_ios_per_sec": 0, 00:18:55.996 "rw_mbytes_per_sec": 0, 00:18:55.996 "r_mbytes_per_sec": 0, 00:18:55.996 "w_mbytes_per_sec": 0 00:18:55.996 }, 00:18:55.996 "claimed": false, 00:18:55.996 "zoned": false, 00:18:55.996 "supported_io_types": { 00:18:55.996 "read": true, 00:18:55.996 "write": true, 00:18:55.996 "unmap": true, 00:18:55.996 "write_zeroes": true, 00:18:55.996 "flush": false, 00:18:55.996 "reset": true, 00:18:55.996 "compare": false, 00:18:55.996 "compare_and_write": false, 00:18:55.996 "abort": false, 00:18:55.996 "nvme_admin": false, 00:18:55.996 "nvme_io": false 00:18:55.996 }, 00:18:55.996 "driver_specific": { 00:18:55.996 "lvol": { 00:18:55.996 "lvol_store_uuid": "8ba0fd13-62da-48c0-bee0-357ab39c59ee", 00:18:55.996 "base_bdev": "aio_bdev", 00:18:55.996 "thin_provision": false, 00:18:55.996 "snapshot": false, 00:18:55.996 "clone": false, 00:18:55.996 "esnap_clone": false 00:18:55.996 } 00:18:55.996 } 00:18:55.996 } 00:18:55.996 ] 00:18:55.996 21:19:50 -- common/autotest_common.sh@893 -- # return 0 00:18:55.996 21:19:50 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:18:55.996 21:19:50 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8ba0fd13-62da-48c0-bee0-357ab39c59ee 00:18:55.996 21:19:50 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:18:55.996 21:19:50 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8ba0fd13-62da-48c0-bee0-357ab39c59ee 00:18:55.996 21:19:50 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:18:56.255 21:19:50 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:18:56.255 21:19:50 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ad8c3075-6cf5-4d6a-a82e-e758fb876eff 00:18:56.255 21:19:50 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8ba0fd13-62da-48c0-bee0-357ab39c59ee 00:18:56.512 21:19:50 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:56.770 21:19:50 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:56.770 00:18:56.770 real 0m14.830s 00:18:56.770 user 0m14.438s 00:18:56.770 sys 0m1.175s 00:18:56.770 21:19:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:56.770 21:19:50 -- 
common/autotest_common.sh@10 -- # set +x 00:18:56.770 ************************************ 00:18:56.770 END TEST lvs_grow_clean 00:18:56.770 ************************************ 00:18:56.770 21:19:50 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:18:56.770 21:19:50 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:56.770 21:19:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:56.770 21:19:50 -- common/autotest_common.sh@10 -- # set +x 00:18:56.770 ************************************ 00:18:56.770 START TEST lvs_grow_dirty 00:18:56.770 ************************************ 00:18:56.770 21:19:50 -- common/autotest_common.sh@1111 -- # lvs_grow dirty 00:18:56.770 21:19:50 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:18:56.770 21:19:50 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:18:56.770 21:19:50 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:18:56.770 21:19:50 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:18:56.770 21:19:50 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:18:56.770 21:19:50 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:18:56.770 21:19:50 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:56.770 21:19:50 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:56.770 21:19:50 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:57.029 21:19:51 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:18:57.029 21:19:51 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:18:57.029 21:19:51 -- target/nvmf_lvs_grow.sh@28 -- # lvs=e3afb653-7c61-4628-a1e7-d7579e78ede4 00:18:57.029 21:19:51 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e3afb653-7c61-4628-a1e7-d7579e78ede4 00:18:57.029 21:19:51 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:18:57.289 21:19:51 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:18:57.289 21:19:51 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:18:57.289 21:19:51 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e3afb653-7c61-4628-a1e7-d7579e78ede4 lvol 150 00:18:57.289 21:19:51 -- target/nvmf_lvs_grow.sh@33 -- # lvol=3829759b-995d-4824-ba91-89c6c7158d33 00:18:57.289 21:19:51 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:57.289 21:19:51 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:18:57.550 [2024-04-23 21:19:51.634446] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:18:57.550 [2024-04-23 21:19:51.634510] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:18:57.550 true 00:18:57.550 21:19:51 -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e3afb653-7c61-4628-a1e7-d7579e78ede4 00:18:57.550 21:19:51 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:18:57.550 21:19:51 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:18:57.550 21:19:51 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:18:57.809 21:19:51 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3829759b-995d-4824-ba91-89c6c7158d33 00:18:57.809 21:19:52 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:58.067 21:19:52 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:58.067 21:19:52 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1436750 00:18:58.067 21:19:52 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:58.067 21:19:52 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1436750 /var/tmp/bdevperf.sock 00:18:58.067 21:19:52 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:18:58.067 21:19:52 -- common/autotest_common.sh@817 -- # '[' -z 1436750 ']' 00:18:58.067 21:19:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:58.067 21:19:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:58.067 21:19:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:58.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:58.067 21:19:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:58.067 21:19:52 -- common/autotest_common.sh@10 -- # set +x 00:18:58.325 [2024-04-23 21:19:52.393977] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 
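
The dirty variant now repeats the clean flow with bdevperf as the initiator. bdevperf starts with no block devices; the test connects it to the target over NVMe/TCP through bdevperf's own RPC socket and then polls until the namespace appears as a bdev. The two RPCs involved, with the names and socket path as this run prints them:

    # Talk to bdevperf (not the target) via its dedicated RPC socket.
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
         -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    # Wait up to 3000 ms for the namespace to surface as bdev Nvme0n1.
    $rpc -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000
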
00:18:58.325 [2024-04-23 21:19:52.394096] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1436750 ] 00:18:58.325 EAL: No free 2048 kB hugepages reported on node 1 00:18:58.325 [2024-04-23 21:19:52.505867] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:58.325 [2024-04-23 21:19:52.593781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:58.893 21:19:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:58.893 21:19:53 -- common/autotest_common.sh@850 -- # return 0 00:18:58.893 21:19:53 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:18:59.153 Nvme0n1 00:18:59.153 21:19:53 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:18:59.413 [ 00:18:59.413 { 00:18:59.413 "name": "Nvme0n1", 00:18:59.413 "aliases": [ 00:18:59.413 "3829759b-995d-4824-ba91-89c6c7158d33" 00:18:59.413 ], 00:18:59.413 "product_name": "NVMe disk", 00:18:59.413 "block_size": 4096, 00:18:59.413 "num_blocks": 38912, 00:18:59.413 "uuid": "3829759b-995d-4824-ba91-89c6c7158d33", 00:18:59.413 "assigned_rate_limits": { 00:18:59.413 "rw_ios_per_sec": 0, 00:18:59.413 "rw_mbytes_per_sec": 0, 00:18:59.413 "r_mbytes_per_sec": 0, 00:18:59.413 "w_mbytes_per_sec": 0 00:18:59.413 }, 00:18:59.413 "claimed": false, 00:18:59.413 "zoned": false, 00:18:59.413 "supported_io_types": { 00:18:59.413 "read": true, 00:18:59.413 "write": true, 00:18:59.413 "unmap": true, 00:18:59.413 "write_zeroes": true, 00:18:59.414 "flush": true, 00:18:59.414 "reset": true, 00:18:59.414 "compare": true, 00:18:59.414 "compare_and_write": true, 00:18:59.414 "abort": true, 00:18:59.414 "nvme_admin": true, 00:18:59.414 "nvme_io": true 00:18:59.414 }, 00:18:59.414 "memory_domains": [ 00:18:59.414 { 00:18:59.414 "dma_device_id": "system", 00:18:59.414 "dma_device_type": 1 00:18:59.414 } 00:18:59.414 ], 00:18:59.414 "driver_specific": { 00:18:59.414 "nvme": [ 00:18:59.414 { 00:18:59.414 "trid": { 00:18:59.414 "trtype": "TCP", 00:18:59.414 "adrfam": "IPv4", 00:18:59.414 "traddr": "10.0.0.2", 00:18:59.414 "trsvcid": "4420", 00:18:59.414 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:18:59.414 }, 00:18:59.414 "ctrlr_data": { 00:18:59.414 "cntlid": 1, 00:18:59.414 "vendor_id": "0x8086", 00:18:59.414 "model_number": "SPDK bdev Controller", 00:18:59.414 "serial_number": "SPDK0", 00:18:59.414 "firmware_revision": "24.05", 00:18:59.414 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:59.414 "oacs": { 00:18:59.414 "security": 0, 00:18:59.414 "format": 0, 00:18:59.414 "firmware": 0, 00:18:59.414 "ns_manage": 0 00:18:59.414 }, 00:18:59.414 "multi_ctrlr": true, 00:18:59.414 "ana_reporting": false 00:18:59.414 }, 00:18:59.414 "vs": { 00:18:59.414 "nvme_version": "1.3" 00:18:59.414 }, 00:18:59.414 "ns_data": { 00:18:59.414 "id": 1, 00:18:59.414 "can_share": true 00:18:59.414 } 00:18:59.414 } 00:18:59.414 ], 00:18:59.414 "mp_policy": "active_passive" 00:18:59.414 } 00:18:59.414 } 00:18:59.414 ] 00:18:59.414 21:19:53 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:59.414 21:19:53 -- 
target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1436894 00:18:59.414 21:19:53 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:18:59.414 Running I/O for 10 seconds... 00:19:00.352 Latency(us) 00:19:00.352 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:00.352 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:00.352 Nvme0n1 : 1.00 22710.00 88.71 0.00 0.00 0.00 0.00 0.00 00:19:00.352 =================================================================================================================== 00:19:00.352 Total : 22710.00 88.71 0.00 0.00 0.00 0.00 0.00 00:19:00.352 00:19:01.293 21:19:55 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u e3afb653-7c61-4628-a1e7-d7579e78ede4 00:19:01.293 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:01.293 Nvme0n1 : 2.00 22907.00 89.48 0.00 0.00 0.00 0.00 0.00 00:19:01.293 =================================================================================================================== 00:19:01.293 Total : 22907.00 89.48 0.00 0.00 0.00 0.00 0.00 00:19:01.293 00:19:01.555 true 00:19:01.555 21:19:55 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:19:01.555 21:19:55 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e3afb653-7c61-4628-a1e7-d7579e78ede4 00:19:01.555 21:19:55 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:19:01.555 21:19:55 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:19:01.555 21:19:55 -- target/nvmf_lvs_grow.sh@65 -- # wait 1436894 00:19:02.490 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:02.490 Nvme0n1 : 3.00 22966.00 89.71 0.00 0.00 0.00 0.00 0.00 00:19:02.490 =================================================================================================================== 00:19:02.490 Total : 22966.00 89.71 0.00 0.00 0.00 0.00 0.00 00:19:02.490 00:19:03.433 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:03.433 Nvme0n1 : 4.00 23005.50 89.87 0.00 0.00 0.00 0.00 0.00 00:19:03.433 =================================================================================================================== 00:19:03.433 Total : 23005.50 89.87 0.00 0.00 0.00 0.00 0.00 00:19:03.433 00:19:04.370 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:04.370 Nvme0n1 : 5.00 23050.20 90.04 0.00 0.00 0.00 0.00 0.00 00:19:04.370 =================================================================================================================== 00:19:04.370 Total : 23050.20 90.04 0.00 0.00 0.00 0.00 0.00 00:19:04.370 00:19:05.313 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:05.313 Nvme0n1 : 6.00 23067.00 90.11 0.00 0.00 0.00 0.00 0.00 00:19:05.313 =================================================================================================================== 00:19:05.313 Total : 23067.00 90.11 0.00 0.00 0.00 0.00 0.00 00:19:05.313 00:19:06.691 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:06.691 Nvme0n1 : 7.00 23026.43 89.95 0.00 0.00 0.00 0.00 0.00 00:19:06.691 =================================================================================================================== 00:19:06.691 Total : 23026.43 89.95 0.00 0.00 0.00 0.00 0.00 00:19:06.691 00:19:07.630 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, 
depth: 128, IO size: 4096) 00:19:07.630 Nvme0n1 : 8.00 23044.00 90.02 0.00 0.00 0.00 0.00 0.00 00:19:07.630 =================================================================================================================== 00:19:07.630 Total : 23044.00 90.02 0.00 0.00 0.00 0.00 0.00 00:19:07.630 00:19:08.569 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:08.569 Nvme0n1 : 9.00 23076.11 90.14 0.00 0.00 0.00 0.00 0.00 00:19:08.569 =================================================================================================================== 00:19:08.569 Total : 23076.11 90.14 0.00 0.00 0.00 0.00 0.00 00:19:08.569 00:19:09.507 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:09.507 Nvme0n1 : 10.00 23079.90 90.16 0.00 0.00 0.00 0.00 0.00 00:19:09.507 =================================================================================================================== 00:19:09.507 Total : 23079.90 90.16 0.00 0.00 0.00 0.00 0.00 00:19:09.507 00:19:09.507 00:19:09.507 Latency(us) 00:19:09.507 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:09.507 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:09.507 Nvme0n1 : 10.00 23072.74 90.13 0.00 0.00 5544.10 3449.26 17246.32 00:19:09.507 =================================================================================================================== 00:19:09.507 Total : 23072.74 90.13 0.00 0.00 5544.10 3449.26 17246.32 00:19:09.507 0 00:19:09.507 21:20:03 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1436750 00:19:09.507 21:20:03 -- common/autotest_common.sh@936 -- # '[' -z 1436750 ']' 00:19:09.507 21:20:03 -- common/autotest_common.sh@940 -- # kill -0 1436750 00:19:09.507 21:20:03 -- common/autotest_common.sh@941 -- # uname 00:19:09.507 21:20:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:09.507 21:20:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1436750 00:19:09.507 21:20:03 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:09.507 21:20:03 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:09.507 21:20:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1436750' 00:19:09.507 killing process with pid 1436750 00:19:09.507 21:20:03 -- common/autotest_common.sh@955 -- # kill 1436750 00:19:09.507 Received shutdown signal, test time was about 10.000000 seconds 00:19:09.507 00:19:09.507 Latency(us) 00:19:09.507 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:09.507 =================================================================================================================== 00:19:09.507 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:09.507 21:20:03 -- common/autotest_common.sh@960 -- # wait 1436750 00:19:09.766 21:20:03 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:10.024 21:20:04 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e3afb653-7c61-4628-a1e7-d7579e78ede4 00:19:10.024 21:20:04 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:19:10.024 21:20:04 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:19:10.024 21:20:04 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:19:10.024 21:20:04 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 1433218 00:19:10.024 21:20:04 -- 
target/nvmf_lvs_grow.sh@74 -- # wait 1433218 00:19:10.024 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 1433218 Killed "${NVMF_APP[@]}" "$@" 00:19:10.283 21:20:04 -- target/nvmf_lvs_grow.sh@74 -- # true 00:19:10.283 21:20:04 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:19:10.283 21:20:04 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:10.283 21:20:04 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:10.283 21:20:04 -- common/autotest_common.sh@10 -- # set +x 00:19:10.283 21:20:04 -- nvmf/common.sh@470 -- # nvmfpid=1438961 00:19:10.283 21:20:04 -- nvmf/common.sh@471 -- # waitforlisten 1438961 00:19:10.283 21:20:04 -- common/autotest_common.sh@817 -- # '[' -z 1438961 ']' 00:19:10.283 21:20:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:10.283 21:20:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:10.283 21:20:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:10.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:10.283 21:20:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:10.283 21:20:04 -- common/autotest_common.sh@10 -- # set +x 00:19:10.283 21:20:04 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:19:10.283 [2024-04-23 21:20:04.382379] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 00:19:10.283 [2024-04-23 21:20:04.382486] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:10.283 EAL: No free 2048 kB hugepages reported on node 1 00:19:10.283 [2024-04-23 21:20:04.511212] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:10.543 [2024-04-23 21:20:04.610106] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:10.543 [2024-04-23 21:20:04.610145] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:10.543 [2024-04-23 21:20:04.610155] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:10.543 [2024-04-23 21:20:04.610165] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:10.543 [2024-04-23 21:20:04.610173] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
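This is the crux of the dirty-lvstore case: the previous nvmf_tgt (pid 1433218) was killed with SIGKILL while the lvstore was still open, so no clean blobstore shutdown ran, and a fresh target is now started via nvmfappstart and polled on /var/tmp/spdk.sock before any further RPCs. A sketch of that start-and-poll idiom, assuming the binary and socket paths shown in this log (the polling loop is an illustration, not the suite's exact helper):

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!
  # Poll the RPC socket until the target answers; bail out if it died early.
  until /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
        rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
      sleep 0.5
  done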
00:19:10.543 [2024-04-23 21:20:04.610199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:11.115 21:20:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:11.115 21:20:05 -- common/autotest_common.sh@850 -- # return 0 00:19:11.115 21:20:05 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:11.115 21:20:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:11.115 21:20:05 -- common/autotest_common.sh@10 -- # set +x 00:19:11.115 21:20:05 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:11.115 21:20:05 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:19:11.115 [2024-04-23 21:20:05.260156] blobstore.c:4779:bs_recover: *NOTICE*: Performing recovery on blobstore 00:19:11.115 [2024-04-23 21:20:05.260286] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:19:11.115 [2024-04-23 21:20:05.260317] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:19:11.115 21:20:05 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:19:11.115 21:20:05 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 3829759b-995d-4824-ba91-89c6c7158d33 00:19:11.115 21:20:05 -- common/autotest_common.sh@885 -- # local bdev_name=3829759b-995d-4824-ba91-89c6c7158d33 00:19:11.115 21:20:05 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:19:11.115 21:20:05 -- common/autotest_common.sh@887 -- # local i 00:19:11.115 21:20:05 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:19:11.115 21:20:05 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:19:11.115 21:20:05 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:19:11.374 21:20:05 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 3829759b-995d-4824-ba91-89c6c7158d33 -t 2000 00:19:11.374 [ 00:19:11.374 { 00:19:11.374 "name": "3829759b-995d-4824-ba91-89c6c7158d33", 00:19:11.374 "aliases": [ 00:19:11.374 "lvs/lvol" 00:19:11.374 ], 00:19:11.374 "product_name": "Logical Volume", 00:19:11.374 "block_size": 4096, 00:19:11.374 "num_blocks": 38912, 00:19:11.374 "uuid": "3829759b-995d-4824-ba91-89c6c7158d33", 00:19:11.374 "assigned_rate_limits": { 00:19:11.374 "rw_ios_per_sec": 0, 00:19:11.374 "rw_mbytes_per_sec": 0, 00:19:11.374 "r_mbytes_per_sec": 0, 00:19:11.374 "w_mbytes_per_sec": 0 00:19:11.374 }, 00:19:11.374 "claimed": false, 00:19:11.374 "zoned": false, 00:19:11.374 "supported_io_types": { 00:19:11.374 "read": true, 00:19:11.374 "write": true, 00:19:11.374 "unmap": true, 00:19:11.374 "write_zeroes": true, 00:19:11.374 "flush": false, 00:19:11.374 "reset": true, 00:19:11.374 "compare": false, 00:19:11.374 "compare_and_write": false, 00:19:11.374 "abort": false, 00:19:11.374 "nvme_admin": false, 00:19:11.374 "nvme_io": false 00:19:11.374 }, 00:19:11.374 "driver_specific": { 00:19:11.374 "lvol": { 00:19:11.374 "lvol_store_uuid": "e3afb653-7c61-4628-a1e7-d7579e78ede4", 00:19:11.374 "base_bdev": "aio_bdev", 00:19:11.374 "thin_provision": false, 00:19:11.374 "snapshot": false, 00:19:11.374 "clone": false, 00:19:11.374 "esnap_clone": false 00:19:11.374 } 00:19:11.374 } 00:19:11.374 } 00:19:11.374 ] 00:19:11.374 21:20:05 -- common/autotest_common.sh@893 -- # return 0 00:19:11.374 21:20:05 -- target/nvmf_lvs_grow.sh@78 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e3afb653-7c61-4628-a1e7-d7579e78ede4 00:19:11.374 21:20:05 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:19:11.633 21:20:05 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:19:11.633 21:20:05 -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e3afb653-7c61-4628-a1e7-d7579e78ede4 00:19:11.633 21:20:05 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:19:11.633 21:20:05 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:19:11.633 21:20:05 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:19:11.892 [2024-04-23 21:20:05.934351] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:19:11.893 21:20:05 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e3afb653-7c61-4628-a1e7-d7579e78ede4 00:19:11.893 21:20:05 -- common/autotest_common.sh@638 -- # local es=0 00:19:11.893 21:20:05 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e3afb653-7c61-4628-a1e7-d7579e78ede4 00:19:11.893 21:20:05 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:19:11.893 21:20:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:11.893 21:20:05 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:19:11.893 21:20:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:11.893 21:20:05 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:19:11.893 21:20:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:11.893 21:20:05 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:19:11.893 21:20:05 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py ]] 00:19:11.893 21:20:05 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e3afb653-7c61-4628-a1e7-d7579e78ede4 00:19:11.893 request: 00:19:11.893 { 00:19:11.893 "uuid": "e3afb653-7c61-4628-a1e7-d7579e78ede4", 00:19:11.893 "method": "bdev_lvol_get_lvstores", 00:19:11.893 "req_id": 1 00:19:11.893 } 00:19:11.893 Got JSON-RPC error response 00:19:11.893 response: 00:19:11.893 { 00:19:11.893 "code": -19, 00:19:11.893 "message": "No such device" 00:19:11.893 } 00:19:11.893 21:20:06 -- common/autotest_common.sh@641 -- # es=1 00:19:11.893 21:20:06 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:11.893 21:20:06 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:11.893 21:20:06 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:11.893 21:20:06 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:19:12.153 aio_bdev 00:19:12.153 21:20:06 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 3829759b-995d-4824-ba91-89c6c7158d33 00:19:12.153 21:20:06 -- common/autotest_common.sh@885 -- # local 
bdev_name=3829759b-995d-4824-ba91-89c6c7158d33 00:19:12.153 21:20:06 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:19:12.153 21:20:06 -- common/autotest_common.sh@887 -- # local i 00:19:12.153 21:20:06 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:19:12.153 21:20:06 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:19:12.153 21:20:06 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:19:12.153 21:20:06 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 3829759b-995d-4824-ba91-89c6c7158d33 -t 2000 00:19:12.414 [ 00:19:12.414 { 00:19:12.414 "name": "3829759b-995d-4824-ba91-89c6c7158d33", 00:19:12.414 "aliases": [ 00:19:12.414 "lvs/lvol" 00:19:12.414 ], 00:19:12.414 "product_name": "Logical Volume", 00:19:12.414 "block_size": 4096, 00:19:12.414 "num_blocks": 38912, 00:19:12.414 "uuid": "3829759b-995d-4824-ba91-89c6c7158d33", 00:19:12.414 "assigned_rate_limits": { 00:19:12.414 "rw_ios_per_sec": 0, 00:19:12.414 "rw_mbytes_per_sec": 0, 00:19:12.414 "r_mbytes_per_sec": 0, 00:19:12.414 "w_mbytes_per_sec": 0 00:19:12.414 }, 00:19:12.414 "claimed": false, 00:19:12.414 "zoned": false, 00:19:12.414 "supported_io_types": { 00:19:12.414 "read": true, 00:19:12.414 "write": true, 00:19:12.414 "unmap": true, 00:19:12.414 "write_zeroes": true, 00:19:12.414 "flush": false, 00:19:12.414 "reset": true, 00:19:12.414 "compare": false, 00:19:12.414 "compare_and_write": false, 00:19:12.414 "abort": false, 00:19:12.414 "nvme_admin": false, 00:19:12.414 "nvme_io": false 00:19:12.414 }, 00:19:12.414 "driver_specific": { 00:19:12.414 "lvol": { 00:19:12.414 "lvol_store_uuid": "e3afb653-7c61-4628-a1e7-d7579e78ede4", 00:19:12.414 "base_bdev": "aio_bdev", 00:19:12.414 "thin_provision": false, 00:19:12.414 "snapshot": false, 00:19:12.414 "clone": false, 00:19:12.414 "esnap_clone": false 00:19:12.414 } 00:19:12.414 } 00:19:12.414 } 00:19:12.414 ] 00:19:12.414 21:20:06 -- common/autotest_common.sh@893 -- # return 0 00:19:12.414 21:20:06 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e3afb653-7c61-4628-a1e7-d7579e78ede4 00:19:12.414 21:20:06 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:19:12.414 21:20:06 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:19:12.414 21:20:06 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e3afb653-7c61-4628-a1e7-d7579e78ede4 00:19:12.414 21:20:06 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:19:12.676 21:20:06 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:19:12.676 21:20:06 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3829759b-995d-4824-ba91-89c6c7158d33 00:19:12.676 21:20:06 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e3afb653-7c61-4628-a1e7-d7579e78ede4 00:19:12.936 21:20:07 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:19:13.195 21:20:07 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:19:13.195 00:19:13.195 real 0m16.298s 00:19:13.195 user 0m42.485s 00:19:13.195 sys 0m3.132s 00:19:13.195 21:20:07 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:19:13.195 21:20:07 -- common/autotest_common.sh@10 -- # set +x 00:19:13.195 ************************************ 00:19:13.195 END TEST lvs_grow_dirty 00:19:13.195 ************************************ 00:19:13.195 21:20:07 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:19:13.195 21:20:07 -- common/autotest_common.sh@794 -- # type=--id 00:19:13.195 21:20:07 -- common/autotest_common.sh@795 -- # id=0 00:19:13.195 21:20:07 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:19:13.195 21:20:07 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:13.195 21:20:07 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:19:13.195 21:20:07 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:19:13.195 21:20:07 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:19:13.195 21:20:07 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:13.195 nvmf_trace.0 00:19:13.195 21:20:07 -- common/autotest_common.sh@809 -- # return 0 00:19:13.195 21:20:07 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:19:13.195 21:20:07 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:13.195 21:20:07 -- nvmf/common.sh@117 -- # sync 00:19:13.195 21:20:07 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:13.195 21:20:07 -- nvmf/common.sh@120 -- # set +e 00:19:13.195 21:20:07 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:13.195 21:20:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:13.195 rmmod nvme_tcp 00:19:13.195 rmmod nvme_fabrics 00:19:13.195 rmmod nvme_keyring 00:19:13.195 21:20:07 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:13.195 21:20:07 -- nvmf/common.sh@124 -- # set -e 00:19:13.195 21:20:07 -- nvmf/common.sh@125 -- # return 0 00:19:13.195 21:20:07 -- nvmf/common.sh@478 -- # '[' -n 1438961 ']' 00:19:13.195 21:20:07 -- nvmf/common.sh@479 -- # killprocess 1438961 00:19:13.195 21:20:07 -- common/autotest_common.sh@936 -- # '[' -z 1438961 ']' 00:19:13.195 21:20:07 -- common/autotest_common.sh@940 -- # kill -0 1438961 00:19:13.195 21:20:07 -- common/autotest_common.sh@941 -- # uname 00:19:13.195 21:20:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:13.195 21:20:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1438961 00:19:13.195 21:20:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:13.195 21:20:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:13.195 21:20:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1438961' 00:19:13.195 killing process with pid 1438961 00:19:13.195 21:20:07 -- common/autotest_common.sh@955 -- # kill 1438961 00:19:13.195 21:20:07 -- common/autotest_common.sh@960 -- # wait 1438961 00:19:13.764 21:20:07 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:13.764 21:20:07 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:13.764 21:20:07 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:13.764 21:20:07 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:13.764 21:20:07 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:13.764 21:20:07 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:13.764 21:20:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:13.764 21:20:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:15.751 21:20:09 -- nvmf/common.sh@279 -- # ip 
-4 addr flush cvl_0_1 00:19:15.751 00:19:15.751 real 0m40.592s 00:19:15.751 user 1m2.180s 00:19:15.751 sys 0m8.891s 00:19:15.751 21:20:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:15.751 21:20:09 -- common/autotest_common.sh@10 -- # set +x 00:19:15.751 ************************************ 00:19:15.751 END TEST nvmf_lvs_grow 00:19:15.751 ************************************ 00:19:15.751 21:20:09 -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:19:15.751 21:20:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:15.751 21:20:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:15.751 21:20:09 -- common/autotest_common.sh@10 -- # set +x 00:19:16.013 ************************************ 00:19:16.013 START TEST nvmf_bdev_io_wait 00:19:16.013 ************************************ 00:19:16.013 21:20:10 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:19:16.013 * Looking for test storage... 00:19:16.013 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:19:16.013 21:20:10 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:19:16.013 21:20:10 -- nvmf/common.sh@7 -- # uname -s 00:19:16.014 21:20:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:16.014 21:20:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:16.014 21:20:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:16.014 21:20:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:16.014 21:20:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:16.014 21:20:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:16.014 21:20:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:16.014 21:20:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:16.014 21:20:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:16.014 21:20:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:16.014 21:20:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:19:16.014 21:20:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:19:16.014 21:20:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:16.014 21:20:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:16.014 21:20:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:19:16.014 21:20:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:16.014 21:20:10 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:19:16.014 21:20:10 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:16.014 21:20:10 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:16.014 21:20:10 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:16.014 21:20:10 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.014 21:20:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.014 21:20:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.014 21:20:10 -- paths/export.sh@5 -- # export PATH 00:19:16.014 21:20:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.014 21:20:10 -- nvmf/common.sh@47 -- # : 0 00:19:16.014 21:20:10 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:16.014 21:20:10 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:16.014 21:20:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:16.014 21:20:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:16.014 21:20:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:16.014 21:20:10 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:16.014 21:20:10 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:16.014 21:20:10 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:16.014 21:20:10 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:16.014 21:20:10 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:16.014 21:20:10 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:19:16.014 21:20:10 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:16.014 21:20:10 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:16.014 21:20:10 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:16.014 21:20:10 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:16.014 21:20:10 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:16.014 21:20:10 -- nvmf/common.sh@617 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:19:16.014 21:20:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:16.014 21:20:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:16.014 21:20:10 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:19:16.014 21:20:10 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:16.014 21:20:10 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:16.014 21:20:10 -- common/autotest_common.sh@10 -- # set +x 00:19:21.291 21:20:15 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:21.291 21:20:15 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:21.291 21:20:15 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:21.291 21:20:15 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:21.291 21:20:15 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:21.291 21:20:15 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:21.291 21:20:15 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:21.291 21:20:15 -- nvmf/common.sh@295 -- # net_devs=() 00:19:21.291 21:20:15 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:21.291 21:20:15 -- nvmf/common.sh@296 -- # e810=() 00:19:21.291 21:20:15 -- nvmf/common.sh@296 -- # local -ga e810 00:19:21.291 21:20:15 -- nvmf/common.sh@297 -- # x722=() 00:19:21.291 21:20:15 -- nvmf/common.sh@297 -- # local -ga x722 00:19:21.291 21:20:15 -- nvmf/common.sh@298 -- # mlx=() 00:19:21.291 21:20:15 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:21.291 21:20:15 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:21.291 21:20:15 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:21.291 21:20:15 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:21.291 21:20:15 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:21.291 21:20:15 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:21.291 21:20:15 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:21.291 21:20:15 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:21.291 21:20:15 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:21.291 21:20:15 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:21.291 21:20:15 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:21.291 21:20:15 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:21.291 21:20:15 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:21.291 21:20:15 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:21.291 21:20:15 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:19:21.291 21:20:15 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:19:21.291 21:20:15 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:19:21.291 21:20:15 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:21.292 21:20:15 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:21.292 21:20:15 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:19:21.292 Found 0000:27:00.0 (0x8086 - 0x159b) 00:19:21.292 21:20:15 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:21.292 21:20:15 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:21.292 21:20:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:21.292 21:20:15 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:21.292 21:20:15 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:21.292 21:20:15 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 
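The arrays being filled here (e810, x722, mlx) are tables of known NIC device IDs; each function in pci_devs is matched against them, and 0000:27:00.0 has just been recognized as an Intel E810 (8086:159b) bound to the ice driver. A rough shell equivalent of the per-device check, using the sysfs layout the suite reads from (the PCI address is the one in this log):

  pci=0000:27:00.0
  ven=$(cat /sys/bus/pci/devices/$pci/vendor)    # e.g. 0x8086
  dev=$(cat /sys/bus/pci/devices/$pci/device)    # e.g. 0x159b
  echo "Found $pci ($ven - $dev)"
  # Netdevs bound to the function appear under .../net/ once a driver claims it
  net=(/sys/bus/pci/devices/$pci/net/*)
  echo "Found net devices under $pci: ${net[@]##*/}"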
00:19:21.292 21:20:15 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:19:21.292 Found 0000:27:00.1 (0x8086 - 0x159b) 00:19:21.292 21:20:15 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:21.292 21:20:15 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:21.292 21:20:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:21.292 21:20:15 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:21.292 21:20:15 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:21.292 21:20:15 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:21.292 21:20:15 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:19:21.292 21:20:15 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:21.292 21:20:15 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:21.292 21:20:15 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:21.292 21:20:15 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:21.292 21:20:15 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:19:21.292 Found net devices under 0000:27:00.0: cvl_0_0 00:19:21.292 21:20:15 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:21.292 21:20:15 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:21.292 21:20:15 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:21.292 21:20:15 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:21.292 21:20:15 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:21.292 21:20:15 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:19:21.292 Found net devices under 0000:27:00.1: cvl_0_1 00:19:21.292 21:20:15 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:21.292 21:20:15 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:21.292 21:20:15 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:21.292 21:20:15 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:21.292 21:20:15 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:19:21.292 21:20:15 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:19:21.292 21:20:15 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:21.292 21:20:15 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:21.292 21:20:15 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:21.292 21:20:15 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:21.292 21:20:15 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:21.292 21:20:15 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:21.292 21:20:15 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:21.292 21:20:15 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:21.292 21:20:15 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:21.292 21:20:15 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:21.292 21:20:15 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:21.292 21:20:15 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:21.292 21:20:15 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:21.292 21:20:15 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:21.292 21:20:15 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:21.292 21:20:15 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:21.292 21:20:15 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:21.553 21:20:15 -- nvmf/common.sh@261 
-- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:21.553 21:20:15 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:21.553 21:20:15 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:21.553 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:21.553 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.322 ms 00:19:21.553 00:19:21.553 --- 10.0.0.2 ping statistics --- 00:19:21.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:21.553 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:19:21.553 21:20:15 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:21.553 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:21.553 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.244 ms 00:19:21.553 00:19:21.553 --- 10.0.0.1 ping statistics --- 00:19:21.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:21.553 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:19:21.553 21:20:15 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:21.553 21:20:15 -- nvmf/common.sh@411 -- # return 0 00:19:21.553 21:20:15 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:21.553 21:20:15 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:21.553 21:20:15 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:21.553 21:20:15 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:21.553 21:20:15 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:21.553 21:20:15 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:21.553 21:20:15 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:21.553 21:20:15 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:21.553 21:20:15 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:21.553 21:20:15 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:21.553 21:20:15 -- common/autotest_common.sh@10 -- # set +x 00:19:21.553 21:20:15 -- nvmf/common.sh@470 -- # nvmfpid=1443781 00:19:21.553 21:20:15 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:21.553 21:20:15 -- nvmf/common.sh@471 -- # waitforlisten 1443781 00:19:21.553 21:20:15 -- common/autotest_common.sh@817 -- # '[' -z 1443781 ']' 00:19:21.553 21:20:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:21.553 21:20:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:21.553 21:20:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:21.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:21.553 21:20:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:21.553 21:20:15 -- common/autotest_common.sh@10 -- # set +x 00:19:21.553 [2024-04-23 21:20:15.771281] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 
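Everything from NVMF_TARGET_NAMESPACE down to the two pings builds the test topology: the target-side port cvl_0_0 is moved into a private network namespace, the pair gets 10.0.0.2 (target) and 10.0.0.1 (initiator), and an iptables rule admits TCP/4420 arriving on the initiator port. Condensed, the plumbing amounts to the following (interface and namespace names taken from this log; any cabled port pair would do):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target side, isolated
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, host ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # host -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # and back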
00:19:21.553 [2024-04-23 21:20:15.771385] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:21.816 EAL: No free 2048 kB hugepages reported on node 1 00:19:21.816 [2024-04-23 21:20:15.893633] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:21.816 [2024-04-23 21:20:15.993770] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:21.816 [2024-04-23 21:20:15.993813] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:21.816 [2024-04-23 21:20:15.993831] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:21.816 [2024-04-23 21:20:15.993840] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:21.816 [2024-04-23 21:20:15.993848] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:21.816 [2024-04-23 21:20:15.993926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:21.816 [2024-04-23 21:20:15.993954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:21.816 [2024-04-23 21:20:15.994057] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:21.816 [2024-04-23 21:20:15.994069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:22.385 21:20:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:22.386 21:20:16 -- common/autotest_common.sh@850 -- # return 0 00:19:22.386 21:20:16 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:22.386 21:20:16 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:22.386 21:20:16 -- common/autotest_common.sh@10 -- # set +x 00:19:22.386 21:20:16 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:22.386 21:20:16 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:19:22.386 21:20:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:22.386 21:20:16 -- common/autotest_common.sh@10 -- # set +x 00:19:22.386 21:20:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:22.386 21:20:16 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:19:22.386 21:20:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:22.386 21:20:16 -- common/autotest_common.sh@10 -- # set +x 00:19:22.386 21:20:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:22.386 21:20:16 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:22.386 21:20:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:22.386 21:20:16 -- common/autotest_common.sh@10 -- # set +x 00:19:22.386 [2024-04-23 21:20:16.633682] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:22.386 21:20:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:22.386 21:20:16 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:22.386 21:20:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:22.386 21:20:16 -- common/autotest_common.sh@10 -- # set +x 00:19:22.644 Malloc0 00:19:22.644 21:20:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:22.644 21:20:16 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:22.644 21:20:16 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:19:22.644 21:20:16 -- common/autotest_common.sh@10 -- # set +x 00:19:22.644 21:20:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:22.644 21:20:16 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:22.644 21:20:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:22.644 21:20:16 -- common/autotest_common.sh@10 -- # set +x 00:19:22.644 21:20:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:22.644 21:20:16 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:22.644 21:20:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:22.644 21:20:16 -- common/autotest_common.sh@10 -- # set +x 00:19:22.644 [2024-04-23 21:20:16.714234] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:22.644 21:20:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:22.644 21:20:16 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1443834 00:19:22.644 21:20:16 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:19:22.644 21:20:16 -- target/bdev_io_wait.sh@30 -- # READ_PID=1443836 00:19:22.644 21:20:16 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1443837 00:19:22.644 21:20:16 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:19:22.644 21:20:16 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1443840 00:19:22.644 21:20:16 -- nvmf/common.sh@521 -- # config=() 00:19:22.644 21:20:16 -- target/bdev_io_wait.sh@35 -- # sync 00:19:22.644 21:20:16 -- nvmf/common.sh@521 -- # local subsystem config 00:19:22.644 21:20:16 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:22.644 21:20:16 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:22.644 { 00:19:22.644 "params": { 00:19:22.644 "name": "Nvme$subsystem", 00:19:22.644 "trtype": "$TEST_TRANSPORT", 00:19:22.644 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:22.644 "adrfam": "ipv4", 00:19:22.644 "trsvcid": "$NVMF_PORT", 00:19:22.644 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:22.644 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:22.644 "hdgst": ${hdgst:-false}, 00:19:22.644 "ddgst": ${ddgst:-false} 00:19:22.644 }, 00:19:22.644 "method": "bdev_nvme_attach_controller" 00:19:22.644 } 00:19:22.644 EOF 00:19:22.644 )") 00:19:22.644 21:20:16 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:19:22.644 21:20:16 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:19:22.644 21:20:16 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:19:22.644 21:20:16 -- nvmf/common.sh@521 -- # config=() 00:19:22.644 21:20:16 -- nvmf/common.sh@521 -- # local subsystem config 00:19:22.644 21:20:16 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:22.644 21:20:16 -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:19:22.644 21:20:16 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:22.644 { 00:19:22.644 "params": { 00:19:22.644 "name": "Nvme$subsystem", 00:19:22.644 "trtype": "$TEST_TRANSPORT", 00:19:22.644 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:22.644 "adrfam": "ipv4", 00:19:22.644 "trsvcid": "$NVMF_PORT", 00:19:22.644 
"subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:22.644 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:22.644 "hdgst": ${hdgst:-false}, 00:19:22.644 "ddgst": ${ddgst:-false} 00:19:22.644 }, 00:19:22.644 "method": "bdev_nvme_attach_controller" 00:19:22.644 } 00:19:22.644 EOF 00:19:22.644 )") 00:19:22.644 21:20:16 -- nvmf/common.sh@521 -- # config=() 00:19:22.644 21:20:16 -- nvmf/common.sh@521 -- # local subsystem config 00:19:22.644 21:20:16 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:22.644 21:20:16 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:22.644 { 00:19:22.644 "params": { 00:19:22.644 "name": "Nvme$subsystem", 00:19:22.644 "trtype": "$TEST_TRANSPORT", 00:19:22.644 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:22.644 "adrfam": "ipv4", 00:19:22.644 "trsvcid": "$NVMF_PORT", 00:19:22.644 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:22.644 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:22.644 "hdgst": ${hdgst:-false}, 00:19:22.644 "ddgst": ${ddgst:-false} 00:19:22.644 }, 00:19:22.644 "method": "bdev_nvme_attach_controller" 00:19:22.644 } 00:19:22.644 EOF 00:19:22.644 )") 00:19:22.644 21:20:16 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:19:22.644 21:20:16 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:19:22.644 21:20:16 -- nvmf/common.sh@521 -- # config=() 00:19:22.644 21:20:16 -- target/bdev_io_wait.sh@37 -- # wait 1443834 00:19:22.644 21:20:16 -- nvmf/common.sh@521 -- # local subsystem config 00:19:22.644 21:20:16 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:22.644 21:20:16 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:22.644 { 00:19:22.644 "params": { 00:19:22.644 "name": "Nvme$subsystem", 00:19:22.644 "trtype": "$TEST_TRANSPORT", 00:19:22.644 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:22.644 "adrfam": "ipv4", 00:19:22.644 "trsvcid": "$NVMF_PORT", 00:19:22.644 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:22.644 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:22.644 "hdgst": ${hdgst:-false}, 00:19:22.644 "ddgst": ${ddgst:-false} 00:19:22.644 }, 00:19:22.644 "method": "bdev_nvme_attach_controller" 00:19:22.644 } 00:19:22.644 EOF 00:19:22.644 )") 00:19:22.644 21:20:16 -- nvmf/common.sh@543 -- # cat 00:19:22.644 21:20:16 -- nvmf/common.sh@543 -- # cat 00:19:22.644 21:20:16 -- nvmf/common.sh@543 -- # cat 00:19:22.644 21:20:16 -- nvmf/common.sh@543 -- # cat 00:19:22.644 21:20:16 -- nvmf/common.sh@545 -- # jq . 00:19:22.644 21:20:16 -- nvmf/common.sh@545 -- # jq . 00:19:22.644 21:20:16 -- nvmf/common.sh@545 -- # jq . 00:19:22.644 21:20:16 -- nvmf/common.sh@545 -- # jq . 
00:19:22.644 21:20:16 -- nvmf/common.sh@546 -- # IFS=, 00:19:22.644 21:20:16 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:19:22.644 "params": { 00:19:22.644 "name": "Nvme1", 00:19:22.644 "trtype": "tcp", 00:19:22.644 "traddr": "10.0.0.2", 00:19:22.644 "adrfam": "ipv4", 00:19:22.644 "trsvcid": "4420", 00:19:22.644 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:22.644 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:22.644 "hdgst": false, 00:19:22.644 "ddgst": false 00:19:22.644 }, 00:19:22.644 "method": "bdev_nvme_attach_controller" 00:19:22.644 }' 00:19:22.644 21:20:16 -- nvmf/common.sh@546 -- # IFS=, 00:19:22.644 21:20:16 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:19:22.644 "params": { 00:19:22.644 "name": "Nvme1", 00:19:22.644 "trtype": "tcp", 00:19:22.644 "traddr": "10.0.0.2", 00:19:22.644 "adrfam": "ipv4", 00:19:22.644 "trsvcid": "4420", 00:19:22.644 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:22.644 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:22.644 "hdgst": false, 00:19:22.644 "ddgst": false 00:19:22.644 }, 00:19:22.644 "method": "bdev_nvme_attach_controller" 00:19:22.644 }' 00:19:22.644 21:20:16 -- nvmf/common.sh@546 -- # IFS=, 00:19:22.644 21:20:16 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:19:22.644 "params": { 00:19:22.644 "name": "Nvme1", 00:19:22.644 "trtype": "tcp", 00:19:22.644 "traddr": "10.0.0.2", 00:19:22.644 "adrfam": "ipv4", 00:19:22.644 "trsvcid": "4420", 00:19:22.644 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:22.644 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:22.644 "hdgst": false, 00:19:22.644 "ddgst": false 00:19:22.644 }, 00:19:22.644 "method": "bdev_nvme_attach_controller" 00:19:22.644 }' 00:19:22.644 21:20:16 -- nvmf/common.sh@546 -- # IFS=, 00:19:22.644 21:20:16 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:19:22.644 "params": { 00:19:22.644 "name": "Nvme1", 00:19:22.644 "trtype": "tcp", 00:19:22.644 "traddr": "10.0.0.2", 00:19:22.644 "adrfam": "ipv4", 00:19:22.644 "trsvcid": "4420", 00:19:22.645 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:22.645 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:22.645 "hdgst": false, 00:19:22.645 "ddgst": false 00:19:22.645 }, 00:19:22.645 "method": "bdev_nvme_attach_controller" 00:19:22.645 }' 00:19:22.645 [2024-04-23 21:20:16.772618] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 00:19:22.645 [2024-04-23 21:20:16.772696] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:19:22.645 [2024-04-23 21:20:16.788749] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 00:19:22.645 [2024-04-23 21:20:16.788863] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:19:22.645 [2024-04-23 21:20:16.790763] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 00:19:22.645 [2024-04-23 21:20:16.790866] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:19:22.645 [2024-04-23 21:20:16.791817] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 
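From here the four initiators come up as separate DPDK processes: disjoint core masks (0x10/0x20/0x40/0x80) plus distinct -i instance IDs give each its own --file-prefix (spdk1 through spdk4), keeping their hugepage and shared-memory state apart so they can run concurrently against one target. A hypothetical condensed launcher for that pattern, reusing the $cfg file sketched earlier in place of the suite's /dev/fd/63 process substitution:

  workloads=(write read flush unmap)
  masks=(0x10 0x20 0x40 0x80)
  for n in 0 1 2 3; do
      # -i sets the instance ID, which also seeds the EAL file prefix
      ./build/examples/bdevperf -m "${masks[$n]}" -i $((n + 1)) -s 256 \
          -q 128 -o 4096 -w "${workloads[$n]}" -t 1 --json "$cfg" &
  done
  wait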
00:19:22.645 [2024-04-23 21:20:16.791924] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:19:22.645 EAL: No free 2048 kB hugepages reported on node 1 00:19:22.645 EAL: No free 2048 kB hugepages reported on node 1 00:19:22.903 [2024-04-23 21:20:16.936393] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:22.903 EAL: No free 2048 kB hugepages reported on node 1 00:19:22.903 [2024-04-23 21:20:17.006964] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:22.903 EAL: No free 2048 kB hugepages reported on node 1 00:19:22.903 [2024-04-23 21:20:17.071123] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:22.903 [2024-04-23 21:20:17.073434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:19:22.903 [2024-04-23 21:20:17.118521] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:22.903 [2024-04-23 21:20:17.134287] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:23.164 [2024-04-23 21:20:17.203421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:19:23.164 [2024-04-23 21:20:17.252297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:19:23.424 Running I/O for 1 seconds... 00:19:23.425 Running I/O for 1 seconds... 00:19:23.425 Running I/O for 1 seconds... 00:19:23.425 Running I/O for 1 seconds... 00:19:24.365 00:19:24.365 Latency(us) 00:19:24.365 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:24.365 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:19:24.365 Nvme1n1 : 1.01 12846.97 50.18 0.00 0.00 9926.91 4622.01 19315.87 00:19:24.365 =================================================================================================================== 00:19:24.365 Total : 12846.97 50.18 0.00 0.00 9926.91 4622.01 19315.87 00:19:24.365 00:19:24.365 Latency(us) 00:19:24.365 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:24.365 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:19:24.365 Nvme1n1 : 1.00 135063.17 527.59 0.00 0.00 943.96 379.42 1112.39 00:19:24.365 =================================================================================================================== 00:19:24.365 Total : 135063.17 527.59 0.00 0.00 943.96 379.42 1112.39 00:19:24.623 00:19:24.623 Latency(us) 00:19:24.623 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:24.623 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:19:24.623 Nvme1n1 : 1.01 11174.81 43.65 0.00 0.00 11415.32 5277.37 28145.99 00:19:24.623 =================================================================================================================== 00:19:24.624 Total : 11174.81 43.65 0.00 0.00 11415.32 5277.37 28145.99 00:19:24.624 00:19:24.624 Latency(us) 00:19:24.624 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:24.624 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:19:24.624 Nvme1n1 : 1.01 10876.10 42.48 0.00 0.00 11732.91 6070.70 19591.81 00:19:24.624 =================================================================================================================== 00:19:24.624 Total : 10876.10 42.48 0.00 0.00 11732.91 6070.70 19591.81 00:19:24.881 21:20:19 -- target/bdev_io_wait.sh@38 -- # wait 1443836 00:19:24.882 
21:20:19 -- target/bdev_io_wait.sh@39 -- # wait 1443837 00:19:24.882 21:20:19 -- target/bdev_io_wait.sh@40 -- # wait 1443840 00:19:24.882 21:20:19 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:24.882 21:20:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:24.882 21:20:19 -- common/autotest_common.sh@10 -- # set +x 00:19:24.882 21:20:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:24.882 21:20:19 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:19:24.882 21:20:19 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:19:24.882 21:20:19 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:24.882 21:20:19 -- nvmf/common.sh@117 -- # sync 00:19:25.141 21:20:19 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:25.141 21:20:19 -- nvmf/common.sh@120 -- # set +e 00:19:25.141 21:20:19 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:25.141 21:20:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:25.141 rmmod nvme_tcp 00:19:25.141 rmmod nvme_fabrics 00:19:25.141 rmmod nvme_keyring 00:19:25.141 21:20:19 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:25.141 21:20:19 -- nvmf/common.sh@124 -- # set -e 00:19:25.141 21:20:19 -- nvmf/common.sh@125 -- # return 0 00:19:25.141 21:20:19 -- nvmf/common.sh@478 -- # '[' -n 1443781 ']' 00:19:25.141 21:20:19 -- nvmf/common.sh@479 -- # killprocess 1443781 00:19:25.141 21:20:19 -- common/autotest_common.sh@936 -- # '[' -z 1443781 ']' 00:19:25.141 21:20:19 -- common/autotest_common.sh@940 -- # kill -0 1443781 00:19:25.141 21:20:19 -- common/autotest_common.sh@941 -- # uname 00:19:25.141 21:20:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:25.141 21:20:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1443781 00:19:25.142 21:20:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:25.142 21:20:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:25.142 21:20:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1443781' 00:19:25.142 killing process with pid 1443781 00:19:25.142 21:20:19 -- common/autotest_common.sh@955 -- # kill 1443781 00:19:25.142 21:20:19 -- common/autotest_common.sh@960 -- # wait 1443781 00:19:25.714 21:20:19 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:25.714 21:20:19 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:25.714 21:20:19 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:25.714 21:20:19 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:25.714 21:20:19 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:25.714 21:20:19 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:25.714 21:20:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:25.714 21:20:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:27.623 21:20:21 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:27.623 00:19:27.623 real 0m11.697s 00:19:27.623 user 0m23.481s 00:19:27.623 sys 0m5.856s 00:19:27.623 21:20:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:27.623 21:20:21 -- common/autotest_common.sh@10 -- # set +x 00:19:27.623 ************************************ 00:19:27.623 END TEST nvmf_bdev_io_wait 00:19:27.623 ************************************ 00:19:27.623 21:20:21 -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:19:27.623 21:20:21 -- common/autotest_common.sh@1087 -- # 
'[' 3 -le 1 ']' 00:19:27.623 21:20:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:27.623 21:20:21 -- common/autotest_common.sh@10 -- # set +x 00:19:27.623 ************************************ 00:19:27.623 START TEST nvmf_queue_depth 00:19:27.623 ************************************ 00:19:27.623 21:20:21 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:19:27.882 * Looking for test storage... 00:19:27.882 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:19:27.882 21:20:21 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:19:27.882 21:20:21 -- nvmf/common.sh@7 -- # uname -s 00:19:27.882 21:20:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:27.882 21:20:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:27.882 21:20:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:27.882 21:20:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:27.882 21:20:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:27.882 21:20:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:27.882 21:20:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:27.882 21:20:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:27.882 21:20:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:27.882 21:20:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:27.882 21:20:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:19:27.882 21:20:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:19:27.882 21:20:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:27.882 21:20:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:27.882 21:20:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:19:27.882 21:20:21 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:27.883 21:20:21 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:19:27.883 21:20:21 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:27.883 21:20:21 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:27.883 21:20:21 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:27.883 21:20:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.883 21:20:21 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.883 21:20:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.883 21:20:21 -- paths/export.sh@5 -- # export PATH 00:19:27.883 21:20:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.883 21:20:21 -- nvmf/common.sh@47 -- # : 0 00:19:27.883 21:20:21 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:27.883 21:20:21 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:27.883 21:20:21 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:27.883 21:20:21 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:27.883 21:20:21 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:27.883 21:20:21 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:27.883 21:20:21 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:27.883 21:20:21 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:27.883 21:20:21 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:19:27.883 21:20:21 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:19:27.883 21:20:21 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:27.883 21:20:21 -- target/queue_depth.sh@19 -- # nvmftestinit 00:19:27.883 21:20:21 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:27.883 21:20:21 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:27.883 21:20:21 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:27.883 21:20:21 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:27.883 21:20:21 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:27.883 21:20:21 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:27.883 21:20:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:27.883 21:20:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:27.883 21:20:21 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:19:27.883 21:20:21 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:27.883 21:20:21 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:27.883 21:20:21 -- 
common/autotest_common.sh@10 -- # set +x 00:19:33.163 21:20:26 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:33.163 21:20:26 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:33.163 21:20:26 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:33.163 21:20:26 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:33.163 21:20:26 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:33.163 21:20:26 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:33.163 21:20:26 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:33.163 21:20:26 -- nvmf/common.sh@295 -- # net_devs=() 00:19:33.163 21:20:26 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:33.163 21:20:26 -- nvmf/common.sh@296 -- # e810=() 00:19:33.163 21:20:26 -- nvmf/common.sh@296 -- # local -ga e810 00:19:33.163 21:20:26 -- nvmf/common.sh@297 -- # x722=() 00:19:33.163 21:20:26 -- nvmf/common.sh@297 -- # local -ga x722 00:19:33.163 21:20:26 -- nvmf/common.sh@298 -- # mlx=() 00:19:33.163 21:20:26 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:33.163 21:20:26 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:33.163 21:20:26 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:33.163 21:20:26 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:33.163 21:20:26 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:33.163 21:20:26 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:33.164 21:20:26 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:33.164 21:20:26 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:33.164 21:20:26 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:33.164 21:20:26 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:33.164 21:20:26 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:33.164 21:20:26 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:33.164 21:20:26 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:33.164 21:20:26 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:33.164 21:20:26 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:19:33.164 21:20:26 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:19:33.164 21:20:26 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:19:33.164 21:20:26 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:33.164 21:20:26 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:33.164 21:20:26 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:19:33.164 Found 0000:27:00.0 (0x8086 - 0x159b) 00:19:33.164 21:20:26 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:33.164 21:20:26 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:33.164 21:20:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:33.164 21:20:26 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:33.164 21:20:26 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:33.164 21:20:26 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:33.164 21:20:26 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:19:33.164 Found 0000:27:00.1 (0x8086 - 0x159b) 00:19:33.164 21:20:26 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:33.164 21:20:26 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:33.164 21:20:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:33.164 21:20:26 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:33.164 
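Both ports of the Intel 0x159b device have been matched at this point; the "Found net devices under ..." lines that follow come from resolving each PCI function to its kernel net device through sysfs. A minimal sketch of that step, using the same expansions the trace shows:

  # Resolve a matched PCI function to its net device name(s), as in
  # nvmf/common.sh: the glob lists the function's netdevs, and the ##*/
  # expansion strips the sysfs path, leaving e.g. cvl_0_0.
  pci=0000:27:00.0
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
  pci_net_devs=("${pci_net_devs[@]##*/}")
  echo "Found net devices under $pci: ${pci_net_devs[*]}"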
21:20:26 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:33.164 21:20:26 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:33.164 21:20:26 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:19:33.164 21:20:26 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:33.164 21:20:26 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:33.164 21:20:26 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:33.164 21:20:26 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:33.164 21:20:26 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:19:33.164 Found net devices under 0000:27:00.0: cvl_0_0 00:19:33.164 21:20:26 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:33.164 21:20:26 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:33.164 21:20:26 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:33.164 21:20:26 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:33.164 21:20:26 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:33.164 21:20:26 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:19:33.164 Found net devices under 0000:27:00.1: cvl_0_1 00:19:33.164 21:20:26 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:33.164 21:20:26 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:33.164 21:20:26 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:33.164 21:20:26 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:33.164 21:20:26 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:19:33.164 21:20:26 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:19:33.164 21:20:26 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:33.164 21:20:26 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:33.164 21:20:26 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:33.164 21:20:26 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:33.164 21:20:26 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:33.164 21:20:26 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:33.164 21:20:26 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:33.164 21:20:26 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:33.164 21:20:26 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:33.164 21:20:26 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:33.164 21:20:26 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:33.164 21:20:26 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:33.164 21:20:26 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:33.164 21:20:27 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:33.164 21:20:27 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:33.164 21:20:27 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:33.164 21:20:27 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:33.164 21:20:27 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:33.164 21:20:27 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:33.164 21:20:27 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:33.164 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:33.164 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.549 ms
00:19:33.164
00:19:33.164 --- 10.0.0.2 ping statistics ---
00:19:33.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:33.164 rtt min/avg/max/mdev = 0.549/0.549/0.549/0.000 ms
00:19:33.164 21:20:27 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:19:33.164 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:19:33.164 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.467 ms
00:19:33.164
00:19:33.164 --- 10.0.0.1 ping statistics ---
00:19:33.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:33.164 rtt min/avg/max/mdev = 0.467/0.467/0.467/0.000 ms
00:19:33.164 21:20:27 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:19:33.164 21:20:27 -- nvmf/common.sh@411 -- # return 0
00:19:33.164 21:20:27 -- nvmf/common.sh@439 -- # '[' '' == iso ']'
00:19:33.164 21:20:27 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:19:33.164 21:20:27 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]]
00:19:33.164 21:20:27 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]]
00:19:33.164 21:20:27 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:19:33.164 21:20:27 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']'
00:19:33.164 21:20:27 -- nvmf/common.sh@463 -- # modprobe nvme-tcp
00:19:33.164 21:20:27 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2
00:19:33.164 21:20:27 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:19:33.164 21:20:27 -- common/autotest_common.sh@710 -- # xtrace_disable
00:19:33.164 21:20:27 -- common/autotest_common.sh@10 -- # set +x
00:19:33.164 21:20:27 -- nvmf/common.sh@470 -- # nvmfpid=1448338
00:19:33.164 21:20:27 -- nvmf/common.sh@471 -- # waitforlisten 1448338
00:19:33.164 21:20:27 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:19:33.164 21:20:27 -- common/autotest_common.sh@817 -- # '[' -z 1448338 ']'
00:19:33.164 21:20:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:33.164 21:20:27 -- common/autotest_common.sh@822 -- # local max_retries=100
00:19:33.164 21:20:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:19:33.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:19:33.164 21:20:27 -- common/autotest_common.sh@826 -- # xtrace_disable
00:19:33.164 21:20:27 -- common/autotest_common.sh@10 -- # set +x
00:19:33.164 [2024-04-23 21:20:27.320227] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization...
00:19:33.164 [2024-04-23 21:20:27.320307] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:19:33.164 EAL: No free 2048 kB hugepages reported on node 1
00:19:33.164 [2024-04-23 21:20:27.424785] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:33.424 [2024-04-23 21:20:27.517527] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:19:33.424 [2024-04-23 21:20:27.517563] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:19:33.424 [2024-04-23 21:20:27.517573] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:33.424 [2024-04-23 21:20:27.517582] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:33.424 [2024-04-23 21:20:27.517589] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:33.424 [2024-04-23 21:20:27.517614] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:33.992 21:20:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:33.992 21:20:28 -- common/autotest_common.sh@850 -- # return 0 00:19:33.992 21:20:28 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:33.992 21:20:28 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:33.992 21:20:28 -- common/autotest_common.sh@10 -- # set +x 00:19:33.992 21:20:28 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:33.992 21:20:28 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:33.992 21:20:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:33.992 21:20:28 -- common/autotest_common.sh@10 -- # set +x 00:19:33.992 [2024-04-23 21:20:28.078079] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:33.992 21:20:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:33.992 21:20:28 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:33.992 21:20:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:33.992 21:20:28 -- common/autotest_common.sh@10 -- # set +x 00:19:33.992 Malloc0 00:19:33.992 21:20:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:33.992 21:20:28 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:33.992 21:20:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:33.992 21:20:28 -- common/autotest_common.sh@10 -- # set +x 00:19:33.992 21:20:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:33.992 21:20:28 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:33.992 21:20:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:33.992 21:20:28 -- common/autotest_common.sh@10 -- # set +x 00:19:33.992 21:20:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:33.992 21:20:28 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:33.992 21:20:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:33.992 21:20:28 -- common/autotest_common.sh@10 -- # set +x 00:19:33.992 [2024-04-23 21:20:28.152373] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:33.992 21:20:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:33.992 21:20:28 -- target/queue_depth.sh@30 -- # bdevperf_pid=1448632 00:19:33.992 21:20:28 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:33.992 21:20:28 -- target/queue_depth.sh@33 -- # waitforlisten 1448632 /var/tmp/bdevperf.sock 00:19:33.992 21:20:28 -- common/autotest_common.sh@817 -- # '[' -z 1448632 ']' 00:19:33.992 21:20:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:33.992 21:20:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:33.993 
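The rpc_cmd calls traced above are the entire target-side setup for the queue-depth test. Written out against scripts/rpc.py (the tool rpc_cmd wraps in autotest), with every flag copied from the trace:

  rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0                 # 64 MiB bdev, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The target runs inside the cvl_0_0_ns_spdk namespace here, so these land on its /var/tmp/spdk.sock exactly as in the trace.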
21:20:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:33.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:33.993 21:20:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:33.993 21:20:28 -- common/autotest_common.sh@10 -- # set +x 00:19:33.993 21:20:28 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:19:33.993 [2024-04-23 21:20:28.224799] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 00:19:33.993 [2024-04-23 21:20:28.224906] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1448632 ] 00:19:34.251 EAL: No free 2048 kB hugepages reported on node 1 00:19:34.251 [2024-04-23 21:20:28.336042] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:34.251 [2024-04-23 21:20:28.430106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:34.821 21:20:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:34.822 21:20:28 -- common/autotest_common.sh@850 -- # return 0 00:19:34.822 21:20:28 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:34.822 21:20:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:34.822 21:20:28 -- common/autotest_common.sh@10 -- # set +x 00:19:34.822 NVMe0n1 00:19:34.822 21:20:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:34.822 21:20:29 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:34.822 Running I/O for 10 seconds... 
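The initiator half, condensed from the same trace: bdevperf starts idle under -z on its own RPC socket, the remote controller is attached over TCP, and bdevperf.py launches the timed run:

  spdk=/var/jenkins/workspace/dsa-phy-autotest/spdk
  $spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &

  # attach the namespace exported by the target, then drive the 10 s verify run
  $spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

With -q 1024 against a single namespace the test is deliberately oversubscribed; the ~84 ms average in the results below is the expected Little's-law consequence of holding 1024 I/Os in flight at ~12k IOPS (1024 / 12150.93 IOPS ~ 84.3 ms).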
00:19:47.037
00:19:47.037 Latency(us)
00:19:47.037 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:47.037 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:19:47.037 Verification LBA range: start 0x0 length 0x4000
00:19:47.037 NVMe0n1 : 10.06 12150.93 47.46 0.00 0.00 83972.92 17660.23 57947.62
00:19:47.037 ===================================================================================================================
00:19:47.037 Total : 12150.93 47.46 0.00 0.00 83972.92 17660.23 57947.62
00:19:47.037 0
00:19:47.037 21:20:39 -- target/queue_depth.sh@39 -- # killprocess 1448632
00:19:47.037 21:20:39 -- common/autotest_common.sh@936 -- # '[' -z 1448632 ']'
00:19:47.037 21:20:39 -- common/autotest_common.sh@940 -- # kill -0 1448632
00:19:47.038 21:20:39 -- common/autotest_common.sh@941 -- # uname
00:19:47.038 21:20:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:19:47.038 21:20:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1448632
00:19:47.038 21:20:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:19:47.038 21:20:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:19:47.038 21:20:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1448632'
00:19:47.038 killing process with pid 1448632
00:19:47.038 21:20:39 -- common/autotest_common.sh@955 -- # kill 1448632
00:19:47.038 Received shutdown signal, test time was about 10.000000 seconds
00:19:47.038
00:19:47.038 Latency(us)
00:19:47.038 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:47.038 ===================================================================================================================
00:19:47.038 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:19:47.038 21:20:39 -- common/autotest_common.sh@960 -- # wait 1448632
00:19:47.038 21:20:39 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:19:47.038 21:20:39 -- target/queue_depth.sh@43 -- # nvmftestfini
00:19:47.038 21:20:39 -- nvmf/common.sh@477 -- # nvmfcleanup
00:19:47.038 21:20:39 -- nvmf/common.sh@117 -- # sync
00:19:47.038 21:20:39 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:19:47.038 21:20:39 -- nvmf/common.sh@120 -- # set +e
00:19:47.038 21:20:39 -- nvmf/common.sh@121 -- # for i in {1..20}
00:19:47.038 21:20:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:19:47.038 rmmod nvme_tcp
00:19:47.038 rmmod nvme_fabrics
00:19:47.038 rmmod nvme_keyring
00:19:47.038 21:20:39 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:19:47.038 21:20:39 -- nvmf/common.sh@124 -- # set -e
00:19:47.038 21:20:39 -- nvmf/common.sh@125 -- # return 0
00:19:47.038 21:20:39 -- nvmf/common.sh@478 -- # '[' -n 1448338 ']'
00:19:47.038 21:20:39 -- nvmf/common.sh@479 -- # killprocess 1448338
00:19:47.038 21:20:39 -- common/autotest_common.sh@936 -- # '[' -z 1448338 ']'
00:19:47.038 21:20:39 -- common/autotest_common.sh@940 -- # kill -0 1448338
00:19:47.038 21:20:39 -- common/autotest_common.sh@941 -- # uname
00:19:47.038 21:20:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:19:47.038 21:20:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1448338
00:19:47.038 21:20:39 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:19:47.038 21:20:39 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:19:47.038 21:20:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1448338'
00:19:47.038 killing process with pid 1448338
21:20:39 -- common/autotest_common.sh@955 -- # kill 1448338 00:19:47.038 21:20:39 -- common/autotest_common.sh@960 -- # wait 1448338 00:19:47.038 21:20:40 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:47.038 21:20:40 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:47.038 21:20:40 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:47.038 21:20:40 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:47.038 21:20:40 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:47.038 21:20:40 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:47.038 21:20:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:47.038 21:20:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:48.419 21:20:42 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:48.419 00:19:48.419 real 0m20.391s 00:19:48.419 user 0m25.208s 00:19:48.419 sys 0m5.130s 00:19:48.419 21:20:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:48.419 21:20:42 -- common/autotest_common.sh@10 -- # set +x 00:19:48.419 ************************************ 00:19:48.419 END TEST nvmf_queue_depth 00:19:48.419 ************************************ 00:19:48.419 21:20:42 -- nvmf/nvmf.sh@52 -- # run_test nvmf_multipath /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:19:48.419 21:20:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:48.419 21:20:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:48.419 21:20:42 -- common/autotest_common.sh@10 -- # set +x 00:19:48.419 ************************************ 00:19:48.419 START TEST nvmf_multipath 00:19:48.419 ************************************ 00:19:48.420 21:20:42 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:19:48.420 * Looking for test storage... 
00:19:48.420 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:19:48.420 21:20:42 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:19:48.420 21:20:42 -- nvmf/common.sh@7 -- # uname -s 00:19:48.420 21:20:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:48.420 21:20:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:48.420 21:20:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:48.420 21:20:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:48.420 21:20:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:48.420 21:20:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:48.420 21:20:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:48.420 21:20:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:48.420 21:20:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:48.420 21:20:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:48.420 21:20:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:19:48.420 21:20:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:19:48.420 21:20:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:48.420 21:20:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:48.420 21:20:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:19:48.420 21:20:42 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:48.420 21:20:42 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:19:48.420 21:20:42 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:48.420 21:20:42 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:48.420 21:20:42 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:48.420 21:20:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:48.420 21:20:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:48.420 21:20:42 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:48.420 21:20:42 -- paths/export.sh@5 -- # export PATH 00:19:48.420 21:20:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:48.420 21:20:42 -- nvmf/common.sh@47 -- # : 0 00:19:48.420 21:20:42 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:48.420 21:20:42 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:48.420 21:20:42 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:48.420 21:20:42 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:48.420 21:20:42 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:48.420 21:20:42 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:48.420 21:20:42 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:48.420 21:20:42 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:48.420 21:20:42 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:48.420 21:20:42 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:48.420 21:20:42 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:19:48.420 21:20:42 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:19:48.420 21:20:42 -- target/multipath.sh@43 -- # nvmftestinit 00:19:48.420 21:20:42 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:48.420 21:20:42 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:48.420 21:20:42 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:48.420 21:20:42 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:48.420 21:20:42 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:48.420 21:20:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:48.420 21:20:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:48.420 21:20:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:48.420 21:20:42 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:19:48.420 21:20:42 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:48.420 21:20:42 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:48.420 21:20:42 -- common/autotest_common.sh@10 -- # set +x 00:19:53.697 21:20:47 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:53.697 21:20:47 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:53.697 21:20:47 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:53.697 21:20:47 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:53.697 21:20:47 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:53.697 21:20:47 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:53.697 21:20:47 
-- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:53.697 21:20:47 -- nvmf/common.sh@295 -- # net_devs=() 00:19:53.697 21:20:47 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:53.698 21:20:47 -- nvmf/common.sh@296 -- # e810=() 00:19:53.698 21:20:47 -- nvmf/common.sh@296 -- # local -ga e810 00:19:53.698 21:20:47 -- nvmf/common.sh@297 -- # x722=() 00:19:53.698 21:20:47 -- nvmf/common.sh@297 -- # local -ga x722 00:19:53.698 21:20:47 -- nvmf/common.sh@298 -- # mlx=() 00:19:53.698 21:20:47 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:53.698 21:20:47 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:53.698 21:20:47 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:53.698 21:20:47 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:53.698 21:20:47 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:53.698 21:20:47 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:53.698 21:20:47 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:53.698 21:20:47 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:53.698 21:20:47 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:53.698 21:20:47 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:53.698 21:20:47 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:53.698 21:20:47 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:53.698 21:20:47 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:53.698 21:20:47 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:53.698 21:20:47 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:19:53.698 21:20:47 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:19:53.698 21:20:47 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:19:53.698 21:20:47 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:53.698 21:20:47 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:53.698 21:20:47 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:19:53.698 Found 0000:27:00.0 (0x8086 - 0x159b) 00:19:53.698 21:20:47 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:53.698 21:20:47 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:53.698 21:20:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:53.698 21:20:47 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:53.698 21:20:47 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:53.698 21:20:47 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:53.698 21:20:47 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:19:53.698 Found 0000:27:00.1 (0x8086 - 0x159b) 00:19:53.698 21:20:47 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:53.698 21:20:47 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:53.698 21:20:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:53.698 21:20:47 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:53.698 21:20:47 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:53.698 21:20:47 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:53.698 21:20:47 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:19:53.698 21:20:47 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:53.698 21:20:47 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:53.698 21:20:47 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:53.698 21:20:47 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:53.698 21:20:47 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:19:53.698 Found net devices under 0000:27:00.0: cvl_0_0 00:19:53.698 21:20:47 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:53.698 21:20:47 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:53.698 21:20:47 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:53.698 21:20:47 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:53.698 21:20:47 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:53.698 21:20:47 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:19:53.698 Found net devices under 0000:27:00.1: cvl_0_1 00:19:53.698 21:20:47 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:53.698 21:20:47 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:53.698 21:20:47 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:53.698 21:20:47 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:53.698 21:20:47 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:19:53.698 21:20:47 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:19:53.698 21:20:47 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:53.698 21:20:47 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:53.698 21:20:47 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:53.698 21:20:47 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:53.698 21:20:47 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:53.698 21:20:47 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:53.698 21:20:47 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:53.698 21:20:47 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:53.698 21:20:47 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:53.698 21:20:47 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:53.698 21:20:47 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:53.698 21:20:47 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:53.698 21:20:47 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:53.698 21:20:47 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:53.698 21:20:47 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:53.698 21:20:47 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:53.698 21:20:47 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:53.698 21:20:47 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:53.698 21:20:47 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:53.698 21:20:47 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:53.698 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:53.698 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.228 ms 00:19:53.698 00:19:53.698 --- 10.0.0.2 ping statistics --- 00:19:53.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:53.698 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:19:53.698 21:20:47 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:53.698 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:53.698 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:19:53.698 00:19:53.698 --- 10.0.0.1 ping statistics --- 00:19:53.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:53.698 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:19:53.698 21:20:47 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:53.698 21:20:47 -- nvmf/common.sh@411 -- # return 0 00:19:53.698 21:20:47 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:53.698 21:20:47 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:53.698 21:20:47 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:53.698 21:20:47 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:53.698 21:20:47 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:53.698 21:20:47 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:53.698 21:20:47 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:53.698 21:20:47 -- target/multipath.sh@45 -- # '[' -z ']' 00:19:53.698 21:20:47 -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:19:53.698 only one NIC for nvmf test 00:19:53.698 21:20:47 -- target/multipath.sh@47 -- # nvmftestfini 00:19:53.698 21:20:47 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:53.698 21:20:47 -- nvmf/common.sh@117 -- # sync 00:19:53.698 21:20:47 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:53.698 21:20:47 -- nvmf/common.sh@120 -- # set +e 00:19:53.698 21:20:47 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:53.698 21:20:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:53.698 rmmod nvme_tcp 00:19:53.698 rmmod nvme_fabrics 00:19:53.698 rmmod nvme_keyring 00:19:53.698 21:20:47 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:53.698 21:20:47 -- nvmf/common.sh@124 -- # set -e 00:19:53.698 21:20:47 -- nvmf/common.sh@125 -- # return 0 00:19:53.698 21:20:47 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:19:53.698 21:20:47 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:53.698 21:20:47 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:53.698 21:20:47 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:53.698 21:20:47 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:53.698 21:20:47 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:53.698 21:20:47 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:53.698 21:20:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:53.698 21:20:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:55.610 21:20:49 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:55.610 21:20:49 -- target/multipath.sh@48 -- # exit 0 00:19:55.610 21:20:49 -- target/multipath.sh@1 -- # nvmftestfini 00:19:55.610 21:20:49 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:55.610 21:20:49 -- nvmf/common.sh@117 -- # sync 00:19:55.610 21:20:49 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:55.610 21:20:49 -- nvmf/common.sh@120 -- # set +e 00:19:55.610 21:20:49 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:55.610 21:20:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:55.610 21:20:49 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:55.610 21:20:49 -- nvmf/common.sh@124 -- # set -e 00:19:55.610 21:20:49 -- nvmf/common.sh@125 -- # return 0 00:19:55.610 21:20:49 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:19:55.610 21:20:49 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:55.610 21:20:49 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:55.610 21:20:49 -- nvmf/common.sh@485 -- # 
nvmf_tcp_fini 00:19:55.610 21:20:49 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:55.610 21:20:49 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:55.610 21:20:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:55.610 21:20:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:55.610 21:20:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:55.610 21:20:49 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:55.610 00:19:55.610 real 0m7.443s 00:19:55.610 user 0m1.455s 00:19:55.610 sys 0m3.879s 00:19:55.610 21:20:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:55.610 21:20:49 -- common/autotest_common.sh@10 -- # set +x 00:19:55.610 ************************************ 00:19:55.610 END TEST nvmf_multipath 00:19:55.610 ************************************ 00:19:55.872 21:20:49 -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:19:55.872 21:20:49 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:55.872 21:20:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:55.872 21:20:49 -- common/autotest_common.sh@10 -- # set +x 00:19:55.872 ************************************ 00:19:55.872 START TEST nvmf_zcopy 00:19:55.872 ************************************ 00:19:55.872 21:20:49 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:19:55.872 * Looking for test storage... 00:19:55.872 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:19:55.872 21:20:50 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:19:55.872 21:20:50 -- nvmf/common.sh@7 -- # uname -s 00:19:55.872 21:20:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:55.872 21:20:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:55.872 21:20:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:55.872 21:20:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:55.872 21:20:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:55.872 21:20:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:55.872 21:20:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:55.872 21:20:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:55.872 21:20:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:55.872 21:20:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:55.872 21:20:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:19:55.872 21:20:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:19:55.872 21:20:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:55.872 21:20:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:55.872 21:20:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:19:55.872 21:20:50 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:55.872 21:20:50 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:19:55.872 21:20:50 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:55.873 21:20:50 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:55.873 21:20:50 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:55.873 21:20:50 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:55.873 21:20:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:55.873 21:20:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:55.873 21:20:50 -- paths/export.sh@5 -- # export PATH 00:19:55.873 21:20:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:55.873 21:20:50 -- nvmf/common.sh@47 -- # : 0 00:19:55.873 21:20:50 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:55.873 21:20:50 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:55.873 21:20:50 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:55.873 21:20:50 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:55.873 21:20:50 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:55.873 21:20:50 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:55.873 21:20:50 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:55.873 21:20:50 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:55.873 21:20:50 -- target/zcopy.sh@12 -- # nvmftestinit 00:19:55.873 21:20:50 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:55.873 21:20:50 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:55.873 21:20:50 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:55.873 21:20:50 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:55.873 21:20:50 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:55.873 21:20:50 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:55.873 21:20:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:55.873 
21:20:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:55.873 21:20:50 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:19:55.873 21:20:50 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:55.873 21:20:50 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:55.873 21:20:50 -- common/autotest_common.sh@10 -- # set +x 00:20:01.153 21:20:55 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:01.153 21:20:55 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:01.153 21:20:55 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:01.153 21:20:55 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:01.153 21:20:55 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:01.153 21:20:55 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:01.153 21:20:55 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:01.153 21:20:55 -- nvmf/common.sh@295 -- # net_devs=() 00:20:01.153 21:20:55 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:01.153 21:20:55 -- nvmf/common.sh@296 -- # e810=() 00:20:01.153 21:20:55 -- nvmf/common.sh@296 -- # local -ga e810 00:20:01.153 21:20:55 -- nvmf/common.sh@297 -- # x722=() 00:20:01.153 21:20:55 -- nvmf/common.sh@297 -- # local -ga x722 00:20:01.153 21:20:55 -- nvmf/common.sh@298 -- # mlx=() 00:20:01.153 21:20:55 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:01.153 21:20:55 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:01.153 21:20:55 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:01.153 21:20:55 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:01.153 21:20:55 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:01.153 21:20:55 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:01.153 21:20:55 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:01.153 21:20:55 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:01.153 21:20:55 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:01.153 21:20:55 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:01.153 21:20:55 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:01.153 21:20:55 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:01.153 21:20:55 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:01.153 21:20:55 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:01.153 21:20:55 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:20:01.153 21:20:55 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:20:01.153 21:20:55 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:20:01.153 21:20:55 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:01.153 21:20:55 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:01.153 21:20:55 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:20:01.153 Found 0000:27:00.0 (0x8086 - 0x159b) 00:20:01.153 21:20:55 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:01.153 21:20:55 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:01.153 21:20:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:01.153 21:20:55 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:01.153 21:20:55 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:01.153 21:20:55 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:01.153 21:20:55 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:20:01.153 Found 0000:27:00.1 (0x8086 - 0x159b) 
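The scan above classifies ports by vendor:device pair, with 0x8086:0x159b collected into the e810 array, and then checks which kernel driver owns each function (ice here). A sysfs sketch of the same lookup; the script itself reads its pre-built pci_bus_cache instead:

  pci=0000:27:00.1
  vendor=$(<"/sys/bus/pci/devices/$pci/vendor")                        # 0x8086
  device=$(<"/sys/bus/pci/devices/$pci/device")                        # 0x159b
  driver=$(basename "$(readlink "/sys/bus/pci/devices/$pci/driver")")  # ice
  echo "Found $pci ($vendor - $device), bound to $driver"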
00:20:01.153 21:20:55 -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:20:01.153 21:20:55 -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:20:01.153 21:20:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:20:01.153 21:20:55 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:20:01.153 21:20:55 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:20:01.153 21:20:55 -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:20:01.153 21:20:55 -- nvmf/common.sh@372 -- # [[ '' == e810 ]]
00:20:01.153 21:20:55 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:20:01.153 21:20:55 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:20:01.153 21:20:55 -- nvmf/common.sh@384 -- # (( 1 == 0 ))
00:20:01.153 21:20:55 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:20:01.153 21:20:55 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0'
00:20:01.153 Found net devices under 0000:27:00.0: cvl_0_0
00:20:01.153 21:20:55 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:20:01.153 21:20:55 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:20:01.153 21:20:55 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:20:01.153 21:20:55 -- nvmf/common.sh@384 -- # (( 1 == 0 ))
00:20:01.153 21:20:55 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:20:01.153 21:20:55 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1'
00:20:01.153 Found net devices under 0000:27:00.1: cvl_0_1
00:20:01.153 21:20:55 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:20:01.153 21:20:55 -- nvmf/common.sh@393 -- # (( 2 == 0 ))
00:20:01.153 21:20:55 -- nvmf/common.sh@403 -- # is_hw=yes
00:20:01.153 21:20:55 -- nvmf/common.sh@405 -- # [[ yes == yes ]]
00:20:01.153 21:20:55 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]]
00:20:01.153 21:20:55 -- nvmf/common.sh@407 -- # nvmf_tcp_init
00:20:01.153 21:20:55 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:20:01.153 21:20:55 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:20:01.153 21:20:55 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:20:01.153 21:20:55 -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:20:01.153 21:20:55 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:20:01.153 21:20:55 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:20:01.153 21:20:55 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:20:01.153 21:20:55 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:20:01.153 21:20:55 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:20:01.153 21:20:55 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:20:01.153 21:20:55 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:20:01.153 21:20:55 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:20:01.153 21:20:55 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:20:01.414 21:20:55 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:20:01.414 21:20:55 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:20:01.414 21:20:55 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:20:01.414 21:20:55 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:20:01.414 21:20:55 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:20:01.414 21:20:55 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:20:01.414 21:20:55 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:20:01.414 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:20:01.414 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.368 ms
00:20:01.414 
00:20:01.414 --- 10.0.0.2 ping statistics ---
00:20:01.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:01.414 rtt min/avg/max/mdev = 0.368/0.368/0.368/0.000 ms
00:20:01.414 21:20:55 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:20:01.676 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:20:01.676 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms
00:20:01.676 
00:20:01.676 --- 10.0.0.1 ping statistics ---
00:20:01.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:01.676 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms
00:20:01.676 21:20:55 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:20:01.676 21:20:55 -- nvmf/common.sh@411 -- # return 0
00:20:01.676 21:20:55 -- nvmf/common.sh@439 -- # '[' '' == iso ']'
00:20:01.676 21:20:55 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:20:01.676 21:20:55 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]]
00:20:01.676 21:20:55 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]]
00:20:01.676 21:20:55 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:20:01.676 21:20:55 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']'
00:20:01.676 21:20:55 -- nvmf/common.sh@463 -- # modprobe nvme-tcp
00:20:01.676 21:20:55 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2
00:20:01.676 21:20:55 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:20:01.676 21:20:55 -- common/autotest_common.sh@710 -- # xtrace_disable
00:20:01.676 21:20:55 -- common/autotest_common.sh@10 -- # set +x
00:20:01.676 21:20:55 -- nvmf/common.sh@470 -- # nvmfpid=1458736
00:20:01.676 21:20:55 -- nvmf/common.sh@471 -- # waitforlisten 1458736
00:20:01.676 21:20:55 -- common/autotest_common.sh@817 -- # '[' -z 1458736 ']'
00:20:01.676 21:20:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:01.676 21:20:55 -- common/autotest_common.sh@822 -- # local max_retries=100
00:20:01.676 21:20:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:01.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:01.676 21:20:55 -- common/autotest_common.sh@826 -- # xtrace_disable
00:20:01.676 21:20:55 -- common/autotest_common.sh@10 -- # set +x
00:20:01.676 21:20:55 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:20:01.676 [2024-04-23 21:20:55.793090] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization...
00:20:01.676 [2024-04-23 21:20:55.793194] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:20:01.676 EAL: No free 2048 kB hugepages reported on node 1
00:20:01.676 [2024-04-23 21:20:55.917104] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
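[editor's note: nvmf_tcp_init above needs no second host: it moves one port of the NIC into a private network namespace to act as the target, while the other port stays in the root namespace as the initiator. A condensed sketch of the sequence, with every command and name taken from the trace; run as root:]

    ip netns add cvl_0_0_ns_spdk                          # namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port moves into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator address (root ns)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
    ping -c 1 10.0.0.2                                    # initiator -> target check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator check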
00:20:01.961 [2024-04-23 21:20:56.014517] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:20:01.961 [2024-04-23 21:20:56.014555] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:20:01.961 [2024-04-23 21:20:56.014565] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:20:01.961 [2024-04-23 21:20:56.014575] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:20:01.961 [2024-04-23 21:20:56.014582] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:20:01.961 [2024-04-23 21:20:56.014611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:20:02.273 21:20:56 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:20:02.273 21:20:56 -- common/autotest_common.sh@850 -- # return 0
00:20:02.273 21:20:56 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:20:02.273 21:20:56 -- common/autotest_common.sh@716 -- # xtrace_disable
00:20:02.273 21:20:56 -- common/autotest_common.sh@10 -- # set +x
00:20:02.273 21:20:56 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:20:02.273 21:20:56 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']'
00:20:02.273 21:20:56 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy
00:20:02.273 21:20:56 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:02.273 21:20:56 -- common/autotest_common.sh@10 -- # set +x
00:20:02.273 [2024-04-23 21:20:56.542400] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:20:02.273 21:20:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:02.534 21:20:56 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:20:02.534 21:20:56 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:02.534 21:20:56 -- common/autotest_common.sh@10 -- # set +x
00:20:02.534 21:20:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:02.534 21:20:56 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:20:02.534 21:20:56 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:02.534 21:20:56 -- common/autotest_common.sh@10 -- # set +x
00:20:02.534 [2024-04-23 21:20:56.558578] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:20:02.534 21:20:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:02.535 21:20:56 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:20:02.535 21:20:56 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:02.535 21:20:56 -- common/autotest_common.sh@10 -- # set +x
00:20:02.535 21:20:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:02.535 21:20:56 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0
00:20:02.535 21:20:56 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:02.535 21:20:56 -- common/autotest_common.sh@10 -- # set +x
00:20:02.535 malloc0
00:20:02.535 21:20:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:02.535 21:20:56 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:20:02.535 21:20:56 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:02.535 21:20:56 -- common/autotest_common.sh@10 -- # set +x
00:20:02.535 21:20:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
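[editor's note: the rpc_cmd calls above provision the target end to end. rpc_cmd is the harness wrapper around SPDK's scripts/rpc.py; the equivalent standalone invocations against the default /var/tmp/spdk.sock, with arguments copied from the trace, would be:]

    rpc=scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy      # TCP transport, zero-copy on
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_malloc_create 32 4096 -b malloc0             # 32 MB RAM bdev, 4096-byte blocks
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1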
00:20:02.535 21:20:56 -- target/zcopy.sh@33 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192
00:20:02.535 21:20:56 -- target/zcopy.sh@33 -- # gen_nvmf_target_json
00:20:02.535 21:20:56 -- nvmf/common.sh@521 -- # config=()
00:20:02.535 21:20:56 -- nvmf/common.sh@521 -- # local subsystem config
00:20:02.535 21:20:56 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}"
00:20:02.535 21:20:56 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF
00:20:02.535 {
00:20:02.535 "params": {
00:20:02.535 "name": "Nvme$subsystem",
00:20:02.535 "trtype": "$TEST_TRANSPORT",
00:20:02.535 "traddr": "$NVMF_FIRST_TARGET_IP",
00:20:02.535 "adrfam": "ipv4",
00:20:02.535 "trsvcid": "$NVMF_PORT",
00:20:02.535 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:20:02.535 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:20:02.535 "hdgst": ${hdgst:-false},
00:20:02.535 "ddgst": ${ddgst:-false}
00:20:02.535 },
00:20:02.535 "method": "bdev_nvme_attach_controller"
00:20:02.535 }
00:20:02.535 EOF
00:20:02.535 )")
00:20:02.535 21:20:56 -- nvmf/common.sh@543 -- # cat
00:20:02.535 21:20:56 -- nvmf/common.sh@545 -- # jq .
00:20:02.535 21:20:56 -- nvmf/common.sh@546 -- # IFS=,
00:20:02.535 21:20:56 -- nvmf/common.sh@547 -- # printf '%s\n' '{
00:20:02.535 "params": {
00:20:02.535 "name": "Nvme1",
00:20:02.535 "trtype": "tcp",
00:20:02.535 "traddr": "10.0.0.2",
00:20:02.535 "adrfam": "ipv4",
00:20:02.535 "trsvcid": "4420",
00:20:02.535 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:20:02.535 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:20:02.535 "hdgst": false,
00:20:02.535 "ddgst": false
00:20:02.535 },
00:20:02.535 "method": "bdev_nvme_attach_controller"
00:20:02.535 }'
00:20:02.535 [2024-04-23 21:20:56.692385] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization...
00:20:02.535 [2024-04-23 21:20:56.692491] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1459034 ]
00:20:02.535 EAL: No free 2048 kB hugepages reported on node 1
00:20:02.535 [2024-04-23 21:20:56.805952] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:02.796 [2024-04-23 21:20:56.902245] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:20:03.058 Running I/O for 10 seconds...
00:20:13.049 
00:20:13.049                                                                  Latency(us)
00:20:13.049 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:20:13.049 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:20:13.049 Verification LBA range: start 0x0 length 0x1000
00:20:13.049 Nvme1n1                     :      10.01    8576.32      67.00       0.00     0.00   14885.37    1073.58   25110.64
00:20:13.049 ===================================================================================================================
00:20:13.049 Total                       :               8576.32      67.00       0.00     0.00   14885.37    1073.58   25110.64
00:20:13.308 21:21:07 -- target/zcopy.sh@39 -- # perfpid=1461667
00:20:13.308 21:21:07 -- target/zcopy.sh@41 -- # xtrace_disable
00:20:13.308 21:21:07 -- common/autotest_common.sh@10 -- # set +x
00:20:13.308 21:21:07 -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:20:13.308 21:21:07 -- target/zcopy.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:20:13.308 21:21:07 -- nvmf/common.sh@521 -- # config=()
00:20:13.308 21:21:07 -- nvmf/common.sh@521 -- # local subsystem config
00:20:13.308 21:21:07 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}"
00:20:13.308 21:21:07 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF
00:20:13.308 {
00:20:13.309 "params": {
00:20:13.309 "name": "Nvme$subsystem",
00:20:13.309 "trtype": "$TEST_TRANSPORT",
00:20:13.309 "traddr": "$NVMF_FIRST_TARGET_IP",
00:20:13.309 "adrfam": "ipv4",
00:20:13.309 "trsvcid": "$NVMF_PORT",
00:20:13.309 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:20:13.309 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:20:13.309 "hdgst": ${hdgst:-false},
00:20:13.309 "ddgst": ${ddgst:-false}
00:20:13.309 },
00:20:13.309 "method": "bdev_nvme_attach_controller"
00:20:13.309 }
00:20:13.309 EOF
00:20:13.309 )")
00:20:13.309 [2024-04-23 21:21:07.512181] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:20:13.309 [2024-04-23 21:21:07.512224] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:20:13.309 21:21:07 -- nvmf/common.sh@543 -- # cat
00:20:13.309 21:21:07 -- nvmf/common.sh@545 -- # jq .
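[editor's note: bdevperf reads its bdev configuration from /dev/fd/62 and /dev/fd/63 above because gen_nvmf_target_json is fed to it through bash process substitution. A trimmed sketch of the pattern follows; the "params" block is copied from the trace, while the "subsystems"/"bdev"/"config" wrapper is the standard SPDK JSON-config layout, reconstructed here because the trace only shows the inner fragment:]

    gen_json() {
        printf '%s\n' '{
          "subsystems": [{
            "subsystem": "bdev",
            "config": [{
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }]
          }]
        }'
    }
    # Process substitution surfaces the config as an anonymous fd, e.g. /dev/fd/63:
    ./build/examples/bdevperf --json <(gen_json) -t 5 -q 128 -w randrw -M 50 -o 8192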
00:20:13.309 [2024-04-23 21:21:07.520144] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:20:13.309 21:21:07 -- nvmf/common.sh@546 -- # IFS=,
00:20:13.309 [2024-04-23 21:21:07.520163] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:20:13.309 21:21:07 -- nvmf/common.sh@547 -- # printf '%s\n' '{
00:20:13.309 "params": {
00:20:13.309 "name": "Nvme1",
00:20:13.309 "trtype": "tcp",
00:20:13.309 "traddr": "10.0.0.2",
00:20:13.309 "adrfam": "ipv4",
00:20:13.309 "trsvcid": "4420",
00:20:13.309 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:20:13.309 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:20:13.309 "hdgst": false,
00:20:13.309 "ddgst": false
00:20:13.309 },
00:20:13.309 "method": "bdev_nvme_attach_controller"
00:20:13.309 }'
[editor's note: the two-line error pair above, subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext "Requested NSID 1 already in use" followed by nvmf_rpc.c:1534:nvmf_rpc_ns_paused "Unable to add namespace", repeats roughly every 8 ms from 21:21:07.528 through 21:21:08.112 while the second bdevperf instance initializes; only the interleaved one-time milestones are kept below.]
00:20:13.309 [2024-04-23 21:21:07.576776] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization...
00:20:13.309 [2024-04-23 21:21:07.576885] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1461667 ]
00:20:13.570 EAL: No free 2048 kB hugepages reported on node 1
00:20:13.570 [2024-04-23 21:21:07.687166] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:13.570 [2024-04-23 21:21:07.775549] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:20:14.094 Running I/O for 5 seconds...
[editor's note: the same error pair continues throughout the 5-second randrw run, every 9 to 15 ms, from 21:21:08.120 through 21:21:09.318, at which point this capture ends mid-run.]
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.143 [2024-04-23 21:21:09.327996] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.143 [2024-04-23 21:21:09.328022] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.143 [2024-04-23 21:21:09.337217] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.143 [2024-04-23 21:21:09.337242] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.143 [2024-04-23 21:21:09.346696] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.143 [2024-04-23 21:21:09.346721] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.143 [2024-04-23 21:21:09.355957] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.143 [2024-04-23 21:21:09.355983] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.143 [2024-04-23 21:21:09.364790] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.143 [2024-04-23 21:21:09.364815] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.143 [2024-04-23 21:21:09.374847] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.143 [2024-04-23 21:21:09.374872] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.143 [2024-04-23 21:21:09.384090] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.143 [2024-04-23 21:21:09.384113] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.143 [2024-04-23 21:21:09.393433] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.143 [2024-04-23 21:21:09.393458] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.143 [2024-04-23 21:21:09.402711] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.143 [2024-04-23 21:21:09.402736] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.143 [2024-04-23 21:21:09.412494] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.143 [2024-04-23 21:21:09.412519] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.405 [2024-04-23 21:21:09.421844] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.405 [2024-04-23 21:21:09.421870] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.405 [2024-04-23 21:21:09.430414] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.405 [2024-04-23 21:21:09.430440] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.406 [2024-04-23 21:21:09.439619] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.406 [2024-04-23 21:21:09.439653] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.406 [2024-04-23 21:21:09.448759] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.406 [2024-04-23 21:21:09.448783] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.406 [2024-04-23 21:21:09.458493] 
subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.406 [2024-04-23 21:21:09.458518] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.406 [2024-04-23 21:21:09.467125] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.406 [2024-04-23 21:21:09.467151] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.406 [2024-04-23 21:21:09.476414] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.406 [2024-04-23 21:21:09.476438] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.406 [2024-04-23 21:21:09.485681] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.406 [2024-04-23 21:21:09.485708] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.406 [2024-04-23 21:21:09.494834] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.406 [2024-04-23 21:21:09.494859] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.406 [2024-04-23 21:21:09.504085] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.406 [2024-04-23 21:21:09.504111] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.406 [2024-04-23 21:21:09.513268] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.406 [2024-04-23 21:21:09.513292] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.406 [2024-04-23 21:21:09.522524] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.406 [2024-04-23 21:21:09.522550] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.406 [2024-04-23 21:21:09.531783] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.406 [2024-04-23 21:21:09.531807] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.406 [2024-04-23 21:21:09.540995] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.406 [2024-04-23 21:21:09.541020] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.406 [2024-04-23 21:21:09.549990] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.406 [2024-04-23 21:21:09.550015] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.406 [2024-04-23 21:21:09.558928] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.406 [2024-04-23 21:21:09.558954] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.406 [2024-04-23 21:21:09.568650] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.406 [2024-04-23 21:21:09.568676] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.406 [2024-04-23 21:21:09.577316] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.406 [2024-04-23 21:21:09.577341] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.406 [2024-04-23 21:21:09.585924] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.406 [2024-04-23 21:21:09.585950] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.406 [2024-04-23 21:21:09.595158] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.406 [2024-04-23 21:21:09.595183] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.406 [2024-04-23 21:21:09.604378] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.406 [2024-04-23 21:21:09.604404] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.406 [2024-04-23 21:21:09.613488] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.406 [2024-04-23 21:21:09.613516] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.406 [2024-04-23 21:21:09.622824] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.406 [2024-04-23 21:21:09.622851] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.406 [2024-04-23 21:21:09.632749] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.406 [2024-04-23 21:21:09.632774] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.406 [2024-04-23 21:21:09.642097] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.406 [2024-04-23 21:21:09.642124] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.406 [2024-04-23 21:21:09.651527] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.406 [2024-04-23 21:21:09.651552] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.406 [2024-04-23 21:21:09.660196] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.406 [2024-04-23 21:21:09.660221] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.406 [2024-04-23 21:21:09.668943] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.406 [2024-04-23 21:21:09.668966] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.406 [2024-04-23 21:21:09.678217] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.406 [2024-04-23 21:21:09.678243] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.668 [2024-04-23 21:21:09.687420] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.668 [2024-04-23 21:21:09.687448] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.668 [2024-04-23 21:21:09.696073] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.668 [2024-04-23 21:21:09.696100] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.668 [2024-04-23 21:21:09.705168] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.668 [2024-04-23 21:21:09.705192] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.668 [2024-04-23 21:21:09.713790] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.668 [2024-04-23 21:21:09.713817] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.668 [2024-04-23 21:21:09.723638] 
subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.668 [2024-04-23 21:21:09.723663] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.668 [2024-04-23 21:21:09.732440] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.668 [2024-04-23 21:21:09.732465] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.668 [2024-04-23 21:21:09.741712] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.668 [2024-04-23 21:21:09.741737] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.668 [2024-04-23 21:21:09.751112] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.668 [2024-04-23 21:21:09.751134] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.668 [2024-04-23 21:21:09.760341] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.668 [2024-04-23 21:21:09.760368] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.668 [2024-04-23 21:21:09.769602] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.668 [2024-04-23 21:21:09.769634] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.668 [2024-04-23 21:21:09.778866] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.668 [2024-04-23 21:21:09.778890] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.668 [2024-04-23 21:21:09.788062] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.668 [2024-04-23 21:21:09.788094] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.668 [2024-04-23 21:21:09.797251] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.668 [2024-04-23 21:21:09.797276] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.668 [2024-04-23 21:21:09.806708] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.668 [2024-04-23 21:21:09.806733] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.668 [2024-04-23 21:21:09.815809] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.668 [2024-04-23 21:21:09.815835] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.668 [2024-04-23 21:21:09.825622] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.668 [2024-04-23 21:21:09.825651] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.668 [2024-04-23 21:21:09.834520] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.668 [2024-04-23 21:21:09.834547] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.668 [2024-04-23 21:21:09.843909] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.668 [2024-04-23 21:21:09.843936] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.668 [2024-04-23 21:21:09.853382] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.668 [2024-04-23 21:21:09.853407] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.668 [2024-04-23 21:21:09.863210] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.668 [2024-04-23 21:21:09.863235] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.668 [2024-04-23 21:21:09.871860] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.668 [2024-04-23 21:21:09.871884] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.668 [2024-04-23 21:21:09.881197] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.668 [2024-04-23 21:21:09.881224] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.668 [2024-04-23 21:21:09.891016] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.668 [2024-04-23 21:21:09.891041] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.668 [2024-04-23 21:21:09.900333] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.668 [2024-04-23 21:21:09.900356] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.668 [2024-04-23 21:21:09.910223] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.668 [2024-04-23 21:21:09.910250] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.668 [2024-04-23 21:21:09.919090] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.668 [2024-04-23 21:21:09.919114] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.668 [2024-04-23 21:21:09.928303] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.668 [2024-04-23 21:21:09.928329] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.668 [2024-04-23 21:21:09.937741] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.668 [2024-04-23 21:21:09.937767] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.927 [2024-04-23 21:21:09.946649] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.928 [2024-04-23 21:21:09.946676] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.928 [2024-04-23 21:21:09.955809] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.928 [2024-04-23 21:21:09.955834] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.928 [2024-04-23 21:21:09.965172] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.928 [2024-04-23 21:21:09.965199] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.928 [2024-04-23 21:21:09.974703] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.928 [2024-04-23 21:21:09.974732] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.928 [2024-04-23 21:21:09.982703] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.928 [2024-04-23 21:21:09.982730] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.928 [2024-04-23 21:21:09.993050] 
subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.928 [2024-04-23 21:21:09.993075] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.928 [2024-04-23 21:21:10.001913] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.928 [2024-04-23 21:21:10.001939] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.928 [2024-04-23 21:21:10.011598] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.928 [2024-04-23 21:21:10.011641] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.928 [2024-04-23 21:21:10.021745] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.928 [2024-04-23 21:21:10.021776] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.928 [2024-04-23 21:21:10.030807] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.928 [2024-04-23 21:21:10.030839] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.928 [2024-04-23 21:21:10.040119] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.928 [2024-04-23 21:21:10.040149] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.928 [2024-04-23 21:21:10.048127] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.928 [2024-04-23 21:21:10.048153] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.928 [2024-04-23 21:21:10.056746] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.928 [2024-04-23 21:21:10.056773] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.928 [2024-04-23 21:21:10.065013] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.928 [2024-04-23 21:21:10.065039] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.928 [2024-04-23 21:21:10.073414] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.928 [2024-04-23 21:21:10.073443] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.928 [2024-04-23 21:21:10.083580] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.928 [2024-04-23 21:21:10.083611] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.928 [2024-04-23 21:21:10.092259] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.928 [2024-04-23 21:21:10.092288] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.928 [2024-04-23 21:21:10.103052] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.928 [2024-04-23 21:21:10.103079] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.928 [2024-04-23 21:21:10.112316] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.928 [2024-04-23 21:21:10.112343] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.928 [2024-04-23 21:21:10.121438] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.928 [2024-04-23 21:21:10.121464] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.928 [2024-04-23 21:21:10.130838] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.928 [2024-04-23 21:21:10.130866] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.928 [2024-04-23 21:21:10.139664] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.928 [2024-04-23 21:21:10.139693] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.928 [2024-04-23 21:21:10.149460] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.928 [2024-04-23 21:21:10.149486] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.928 [2024-04-23 21:21:10.158344] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.928 [2024-04-23 21:21:10.158371] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.928 [2024-04-23 21:21:10.167717] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.928 [2024-04-23 21:21:10.167744] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.928 [2024-04-23 21:21:10.177181] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.928 [2024-04-23 21:21:10.177208] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.928 [2024-04-23 21:21:10.185986] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.928 [2024-04-23 21:21:10.186011] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.928 [2024-04-23 21:21:10.196059] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.928 [2024-04-23 21:21:10.196084] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.189 [2024-04-23 21:21:10.204889] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.189 [2024-04-23 21:21:10.204916] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.189 [2024-04-23 21:21:10.213764] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.189 [2024-04-23 21:21:10.213790] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.189 [2024-04-23 21:21:10.222847] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.189 [2024-04-23 21:21:10.222872] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.189 [2024-04-23 21:21:10.231797] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.189 [2024-04-23 21:21:10.231824] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.189 [2024-04-23 21:21:10.241126] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.189 [2024-04-23 21:21:10.241153] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.189 [2024-04-23 21:21:10.250354] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.189 [2024-04-23 21:21:10.250380] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.189 [2024-04-23 21:21:10.259958] 
subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.189 [2024-04-23 21:21:10.259986] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.189 [2024-04-23 21:21:10.269541] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.189 [2024-04-23 21:21:10.269568] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.189 [2024-04-23 21:21:10.279067] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.189 [2024-04-23 21:21:10.279093] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.189 [2024-04-23 21:21:10.288188] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.189 [2024-04-23 21:21:10.288215] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.189 [2024-04-23 21:21:10.297280] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.189 [2024-04-23 21:21:10.297306] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.190 [2024-04-23 21:21:10.306788] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.190 [2024-04-23 21:21:10.306815] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.190 [2024-04-23 21:21:10.315871] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.190 [2024-04-23 21:21:10.315897] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.190 [2024-04-23 21:21:10.325961] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.190 [2024-04-23 21:21:10.325989] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.190 [2024-04-23 21:21:10.335907] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.190 [2024-04-23 21:21:10.335939] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.190 [2024-04-23 21:21:10.345124] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.190 [2024-04-23 21:21:10.345150] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.190 [2024-04-23 21:21:10.354014] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.190 [2024-04-23 21:21:10.354039] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.190 [2024-04-23 21:21:10.363328] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.190 [2024-04-23 21:21:10.363355] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.190 [2024-04-23 21:21:10.372057] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.190 [2024-04-23 21:21:10.372083] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.190 [2024-04-23 21:21:10.380810] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.190 [2024-04-23 21:21:10.380839] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.190 [2024-04-23 21:21:10.389734] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.190 [2024-04-23 21:21:10.389761] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.190 [2024-04-23 21:21:10.398675] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.190 [2024-04-23 21:21:10.398700] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.190 [2024-04-23 21:21:10.407880] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.190 [2024-04-23 21:21:10.407907] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.190 [2024-04-23 21:21:10.417680] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.190 [2024-04-23 21:21:10.417706] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.190 [2024-04-23 21:21:10.426461] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.190 [2024-04-23 21:21:10.426489] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.190 [2024-04-23 21:21:10.435873] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.190 [2024-04-23 21:21:10.435902] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.190 [2024-04-23 21:21:10.445059] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.190 [2024-04-23 21:21:10.445086] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.190 [2024-04-23 21:21:10.455025] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.190 [2024-04-23 21:21:10.455052] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.450 [2024-04-23 21:21:10.463856] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.450 [2024-04-23 21:21:10.463884] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.450 [2024-04-23 21:21:10.473407] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.450 [2024-04-23 21:21:10.473435] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.450 [2024-04-23 21:21:10.482579] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.450 [2024-04-23 21:21:10.482604] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.450 [2024-04-23 21:21:10.492070] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.450 [2024-04-23 21:21:10.492094] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.450 [2024-04-23 21:21:10.500741] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.450 [2024-04-23 21:21:10.500767] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.450 [2024-04-23 21:21:10.510161] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.450 [2024-04-23 21:21:10.510187] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.450 [2024-04-23 21:21:10.519552] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.450 [2024-04-23 21:21:10.519576] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.450 [2024-04-23 21:21:10.528868] 
subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.450 [2024-04-23 21:21:10.528892] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.450 [2024-04-23 21:21:10.537558] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.450 [2024-04-23 21:21:10.537583] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.450 [2024-04-23 21:21:10.546814] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.450 [2024-04-23 21:21:10.546839] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.450 [2024-04-23 21:21:10.555902] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.450 [2024-04-23 21:21:10.555927] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.450 [2024-04-23 21:21:10.565054] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.450 [2024-04-23 21:21:10.565078] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.450 [2024-04-23 21:21:10.575040] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.450 [2024-04-23 21:21:10.575068] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.450 [2024-04-23 21:21:10.584144] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.450 [2024-04-23 21:21:10.584169] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.450 [2024-04-23 21:21:10.593686] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.450 [2024-04-23 21:21:10.593710] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.450 [2024-04-23 21:21:10.603284] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.450 [2024-04-23 21:21:10.603311] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.450 [2024-04-23 21:21:10.612614] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.450 [2024-04-23 21:21:10.612644] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.450 [2024-04-23 21:21:10.621831] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.450 [2024-04-23 21:21:10.621855] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.450 [2024-04-23 21:21:10.631647] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.450 [2024-04-23 21:21:10.631672] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.450 [2024-04-23 21:21:10.641006] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.450 [2024-04-23 21:21:10.641030] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.450 [2024-04-23 21:21:10.650409] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.450 [2024-04-23 21:21:10.650433] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.450 [2024-04-23 21:21:10.659795] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.450 [2024-04-23 21:21:10.659826] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.450 [2024-04-23 21:21:10.668399] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.450 [2024-04-23 21:21:10.668423] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.450 [2024-04-23 21:21:10.677908] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.450 [2024-04-23 21:21:10.677934] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.450 [2024-04-23 21:21:10.687227] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.450 [2024-04-23 21:21:10.687252] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.451 [2024-04-23 21:21:10.696555] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.451 [2024-04-23 21:21:10.696581] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.451 [2024-04-23 21:21:10.705853] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.451 [2024-04-23 21:21:10.705879] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.451 [2024-04-23 21:21:10.715316] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.451 [2024-04-23 21:21:10.715344] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.712 [2024-04-23 21:21:10.724456] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.712 [2024-04-23 21:21:10.724483] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.712 [2024-04-23 21:21:10.733165] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.712 [2024-04-23 21:21:10.733191] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.713 [2024-04-23 21:21:10.742511] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.713 [2024-04-23 21:21:10.742535] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.713 [2024-04-23 21:21:10.751882] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.713 [2024-04-23 21:21:10.751906] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.713 [2024-04-23 21:21:10.761807] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.713 [2024-04-23 21:21:10.761831] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.713 [2024-04-23 21:21:10.770556] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.713 [2024-04-23 21:21:10.770579] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.713 [2024-04-23 21:21:10.780333] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.713 [2024-04-23 21:21:10.780359] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.713 [2024-04-23 21:21:10.789049] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.713 [2024-04-23 21:21:10.789073] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.713 [2024-04-23 21:21:10.798153] 
subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.713 [2024-04-23 21:21:10.798179] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.713 [2024-04-23 21:21:10.807313] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.713 [2024-04-23 21:21:10.807338] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.713 [2024-04-23 21:21:10.816329] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.713 [2024-04-23 21:21:10.816353] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.713 [2024-04-23 21:21:10.825372] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.713 [2024-04-23 21:21:10.825395] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.713 [2024-04-23 21:21:10.834681] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.713 [2024-04-23 21:21:10.834711] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.713 [2024-04-23 21:21:10.844032] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.713 [2024-04-23 21:21:10.844056] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.713 [2024-04-23 21:21:10.854133] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.713 [2024-04-23 21:21:10.854160] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.713 [2024-04-23 21:21:10.863544] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.713 [2024-04-23 21:21:10.863568] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.713 [2024-04-23 21:21:10.872977] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.713 [2024-04-23 21:21:10.873004] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.713 [2024-04-23 21:21:10.882189] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.713 [2024-04-23 21:21:10.882216] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.713 [2024-04-23 21:21:10.891362] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.713 [2024-04-23 21:21:10.891386] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.713 [2024-04-23 21:21:10.901451] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.713 [2024-04-23 21:21:10.901477] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.713 [2024-04-23 21:21:10.910214] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.713 [2024-04-23 21:21:10.910239] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.713 [2024-04-23 21:21:10.920223] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.713 [2024-04-23 21:21:10.920249] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.713 [2024-04-23 21:21:10.929564] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.713 [2024-04-23 21:21:10.929590] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.713 [2024-04-23 21:21:10.938811] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.713 [2024-04-23 21:21:10.938836] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.713 [2024-04-23 21:21:10.947926] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.713 [2024-04-23 21:21:10.947952] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.713 [2024-04-23 21:21:10.957814] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.713 [2024-04-23 21:21:10.957839] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.713 [2024-04-23 21:21:10.967370] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.713 [2024-04-23 21:21:10.967394] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.713 [2024-04-23 21:21:10.976879] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.713 [2024-04-23 21:21:10.976903] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.975 [2024-04-23 21:21:10.986173] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.975 [2024-04-23 21:21:10.986202] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.975 [2024-04-23 21:21:10.995450] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.975 [2024-04-23 21:21:10.995475] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.975 [2024-04-23 21:21:11.004670] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.975 [2024-04-23 21:21:11.004697] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.975 [2024-04-23 21:21:11.013736] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.975 [2024-04-23 21:21:11.013765] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.975 [2024-04-23 21:21:11.022855] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.975 [2024-04-23 21:21:11.022883] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.975 [2024-04-23 21:21:11.031975] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.975 [2024-04-23 21:21:11.032001] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.975 [2024-04-23 21:21:11.041451] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.975 [2024-04-23 21:21:11.041477] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.975 [2024-04-23 21:21:11.050329] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.975 [2024-04-23 21:21:11.050354] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.975 [2024-04-23 21:21:11.058953] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:16.975 [2024-04-23 21:21:11.058976] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:16.975 [2024-04-23 21:21:11.068311] 
subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:20:16.975 [2024-04-23 21:21:11.068337] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-record pair -- subsystem.c:1900 "Requested NSID 1 already in use" followed by nvmf_rpc.c:1534 "Unable to add namespace" -- repeats for every further add-namespace attempt, timestamps advancing from 21:21:11.077610 through 21:21:13.125009 (elapsed 00:20:16.975 to 00:20:19.064); the duplicate records are collapsed here ...]
00:20:19.064
00:20:19.064 Latency(us)
00:20:19.064 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:19.064 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:20:19.064 Nvme1n1 : 5.01 16599.03 129.68 0.00 0.00 7704.19 2311.01 20971.52
00:20:19.064 ===================================================================================================================
00:20:19.064 Total : 16599.03 129.68 0.00 0.00 7704.19 2311.01 20971.52
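The MiB/s column above follows directly from the IOPS figure and this job's 8192-byte I/O size; a quick sanity check, using only numbers taken from the table:

awk 'BEGIN { print 16599.03 * 8192 / (1024 * 1024) }'   # prints ~129.68, matching the MiB/s column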
[... the same subsystem.c:1900 / nvmf_rpc.c:1534 error pair continues for the remaining attempts, timestamps 21:21:13.132321 through 21:21:13.492408; the duplicate records are collapsed here ...]
00:20:19.324 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1461667) - No such process
00:20:19.324 21:21:13 -- target/zcopy.sh@49 -- # wait 1461667
00:20:19.324 21:21:13 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:20:19.324 21:21:13 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:19.324 21:21:13 -- common/autotest_common.sh@10 -- # set +x
00:20:19.324 21:21:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:19.324 21:21:13 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:20:19.324 21:21:13 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:19.324 21:21:13 -- common/autotest_common.sh@10 -- # set +x
00:20:19.324 delay0
00:20:19.324 21:21:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:19.324 21:21:13 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:20:19.324 21:21:13 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:19.324 21:21:13 -- common/autotest_common.sh@10 -- # set +x
00:20:19.324 21:21:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:19.324 21:21:13 -- target/zcopy.sh@56 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:20:19.324 EAL: No free 2048 kB hugepages reported on node 1
00:20:19.582 [2024-04-23 21:21:13.646559] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
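The xtrace records above capture the interesting part of the test: zcopy.sh swaps namespace 1 of cnode1 onto a deliberately slow delay bdev and then drives abortable I/O at it. A minimal standalone sketch of the same sequence, assuming a target is already running with a malloc0 bdev, and calling scripts/rpc.py directly in place of the harness's rpc_cmd wrapper (RPC socket left at rpc.py's default):

SPDK=/var/jenkins/workspace/dsa-phy-autotest/spdk

# Drop the existing namespace 1 from the subsystem.
$SPDK/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1

# Wrap malloc0 in a delay bdev; -r/-t/-w/-n are the average/p99 read and
# write latencies in microseconds (one full second each here).
$SPDK/scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000

# Re-attach the now-slow bdev as namespace 1.
$SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1

# Drive 50/50 random read/write at queue depth 64 for 5 seconds, aborting I/O.
$SPDK/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'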
or discovery service referral 00:20:26.156 Initializing NVMe Controllers 00:20:26.156 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:26.156 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:26.156 Initialization complete. Launching workers. 00:20:26.156 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 42 00:20:26.156 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 329, failed to submit 33 00:20:26.156 success 93, unsuccess 236, failed 0 00:20:26.156 21:21:19 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:20:26.156 21:21:19 -- target/zcopy.sh@60 -- # nvmftestfini 00:20:26.156 21:21:19 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:26.156 21:21:19 -- nvmf/common.sh@117 -- # sync 00:20:26.156 21:21:19 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:26.156 21:21:19 -- nvmf/common.sh@120 -- # set +e 00:20:26.156 21:21:19 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:26.156 21:21:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:26.156 rmmod nvme_tcp 00:20:26.156 rmmod nvme_fabrics 00:20:26.156 rmmod nvme_keyring 00:20:26.156 21:21:19 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:26.156 21:21:19 -- nvmf/common.sh@124 -- # set -e 00:20:26.156 21:21:19 -- nvmf/common.sh@125 -- # return 0 00:20:26.156 21:21:19 -- nvmf/common.sh@478 -- # '[' -n 1458736 ']' 00:20:26.156 21:21:19 -- nvmf/common.sh@479 -- # killprocess 1458736 00:20:26.156 21:21:19 -- common/autotest_common.sh@936 -- # '[' -z 1458736 ']' 00:20:26.156 21:21:19 -- common/autotest_common.sh@940 -- # kill -0 1458736 00:20:26.156 21:21:19 -- common/autotest_common.sh@941 -- # uname 00:20:26.156 21:21:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:26.156 21:21:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1458736 00:20:26.156 21:21:19 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:26.156 21:21:19 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:26.156 21:21:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1458736' 00:20:26.156 killing process with pid 1458736 00:20:26.156 21:21:19 -- common/autotest_common.sh@955 -- # kill 1458736 00:20:26.156 21:21:19 -- common/autotest_common.sh@960 -- # wait 1458736 00:20:26.156 21:21:20 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:26.156 21:21:20 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:26.156 21:21:20 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:26.156 21:21:20 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:26.156 21:21:20 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:26.156 21:21:20 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:26.156 21:21:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:26.156 21:21:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:28.696 21:21:22 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:28.696 00:20:28.696 real 0m32.425s 00:20:28.696 user 0m45.305s 00:20:28.696 sys 0m9.258s 00:20:28.696 21:21:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:28.696 21:21:22 -- common/autotest_common.sh@10 -- # set +x 00:20:28.696 ************************************ 00:20:28.696 END TEST nvmf_zcopy 00:20:28.696 ************************************ 00:20:28.696 21:21:22 -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:20:28.696 21:21:22 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:28.696 21:21:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:28.696 21:21:22 -- common/autotest_common.sh@10 -- # set +x 00:20:28.696 ************************************ 00:20:28.696 START TEST nvmf_nmic 00:20:28.696 ************************************ 00:20:28.696 21:21:22 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:20:28.696 * Looking for test storage... 00:20:28.696 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:20:28.696 21:21:22 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:20:28.696 21:21:22 -- nvmf/common.sh@7 -- # uname -s 00:20:28.696 21:21:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:28.696 21:21:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:28.696 21:21:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:28.696 21:21:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:28.696 21:21:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:28.696 21:21:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:28.696 21:21:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:28.696 21:21:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:28.696 21:21:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:28.696 21:21:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:28.696 21:21:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:20:28.696 21:21:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:20:28.696 21:21:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:28.696 21:21:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:28.696 21:21:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:20:28.696 21:21:22 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:28.696 21:21:22 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:20:28.696 21:21:22 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:28.696 21:21:22 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:28.696 21:21:22 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:28.696 21:21:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.696 21:21:22 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.696 21:21:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.696 21:21:22 -- paths/export.sh@5 -- # export PATH 00:20:28.696 21:21:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.696 21:21:22 -- nvmf/common.sh@47 -- # : 0 00:20:28.696 21:21:22 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:28.696 21:21:22 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:28.696 21:21:22 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:28.696 21:21:22 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:28.696 21:21:22 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:28.696 21:21:22 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:28.696 21:21:22 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:28.696 21:21:22 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:28.696 21:21:22 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:28.696 21:21:22 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:28.696 21:21:22 -- target/nmic.sh@14 -- # nvmftestinit 00:20:28.696 21:21:22 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:28.696 21:21:22 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:28.696 21:21:22 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:28.696 21:21:22 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:28.696 21:21:22 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:28.696 21:21:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:28.696 21:21:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:28.696 21:21:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:28.696 21:21:22 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:20:28.696 21:21:22 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:28.696 21:21:22 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:28.696 21:21:22 -- common/autotest_common.sh@10 -- # set +x 00:20:33.977 21:21:28 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 
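The gather_supported_nvmf_pci_devs pass that the trace enters next identifies candidate NICs by PCI vendor:device ID and then maps each PCI function to the kernel interface names exposed under it in sysfs. As a rough standalone illustration of that walk, hardcoding the Intel E810 ID (0x8086:0x159b) found on this host; this helper script is a sketch, not the common.sh implementation:

#!/usr/bin/env bash
# Sketch: locate E810 functions on the PCI bus and list the kernel net
# interfaces registered under each one.
for pci in /sys/bus/pci/devices/*; do
    vendor=$(<"$pci/vendor")                 # e.g. 0x8086 (Intel)
    device=$(<"$pci/device")                 # e.g. 0x159b (E810)
    [[ $vendor == 0x8086 && $device == 0x159b ]] || continue
    echo "Found ${pci##*/} ($vendor - $device)"
    for net in "$pci"/net/*; do              # interface names live in .../net/
        [[ -e $net ]] && echo "  net device under ${pci##*/}: ${net##*/}"
    done
done

The in-process version in the trace below builds its e810/x722/mlx lists from a cached bus scan instead, then walks pci_devs the same way to print the two cvl_0_* ports.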
00:20:33.977 21:21:28 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:33.977 21:21:28 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:33.977 21:21:28 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:33.977 21:21:28 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:33.977 21:21:28 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:33.977 21:21:28 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:33.977 21:21:28 -- nvmf/common.sh@295 -- # net_devs=() 00:20:33.977 21:21:28 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:33.977 21:21:28 -- nvmf/common.sh@296 -- # e810=() 00:20:33.977 21:21:28 -- nvmf/common.sh@296 -- # local -ga e810 00:20:33.977 21:21:28 -- nvmf/common.sh@297 -- # x722=() 00:20:33.977 21:21:28 -- nvmf/common.sh@297 -- # local -ga x722 00:20:33.977 21:21:28 -- nvmf/common.sh@298 -- # mlx=() 00:20:33.977 21:21:28 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:33.977 21:21:28 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:33.977 21:21:28 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:33.977 21:21:28 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:33.977 21:21:28 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:33.977 21:21:28 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:33.977 21:21:28 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:33.977 21:21:28 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:33.977 21:21:28 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:33.977 21:21:28 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:33.977 21:21:28 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:33.977 21:21:28 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:33.977 21:21:28 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:33.977 21:21:28 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:33.977 21:21:28 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:20:33.977 21:21:28 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:20:33.977 21:21:28 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:20:33.977 21:21:28 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:33.977 21:21:28 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:33.977 21:21:28 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:20:33.977 Found 0000:27:00.0 (0x8086 - 0x159b) 00:20:33.977 21:21:28 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:33.977 21:21:28 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:33.977 21:21:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:33.977 21:21:28 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:33.977 21:21:28 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:33.977 21:21:28 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:33.977 21:21:28 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:20:33.977 Found 0000:27:00.1 (0x8086 - 0x159b) 00:20:33.977 21:21:28 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:33.977 21:21:28 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:33.977 21:21:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:33.977 21:21:28 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:33.977 21:21:28 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:33.977 21:21:28 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:33.977 21:21:28 
-- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:20:33.977 21:21:28 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:33.977 21:21:28 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:33.977 21:21:28 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:33.977 21:21:28 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:33.977 21:21:28 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:20:33.977 Found net devices under 0000:27:00.0: cvl_0_0 00:20:33.977 21:21:28 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:33.977 21:21:28 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:33.977 21:21:28 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:33.977 21:21:28 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:33.977 21:21:28 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:33.977 21:21:28 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:20:33.977 Found net devices under 0000:27:00.1: cvl_0_1 00:20:33.977 21:21:28 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:33.977 21:21:28 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:33.977 21:21:28 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:33.977 21:21:28 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:33.978 21:21:28 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:20:33.978 21:21:28 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:20:33.978 21:21:28 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:33.978 21:21:28 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:33.978 21:21:28 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:33.978 21:21:28 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:33.978 21:21:28 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:33.978 21:21:28 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:33.978 21:21:28 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:33.978 21:21:28 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:33.978 21:21:28 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:33.978 21:21:28 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:33.978 21:21:28 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:33.978 21:21:28 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:33.978 21:21:28 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:34.236 21:21:28 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:34.236 21:21:28 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:34.236 21:21:28 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:34.236 21:21:28 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:34.236 21:21:28 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:34.236 21:21:28 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:34.236 21:21:28 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:34.236 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:34.236 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.366 ms 00:20:34.236 00:20:34.236 --- 10.0.0.2 ping statistics --- 00:20:34.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:34.236 rtt min/avg/max/mdev = 0.366/0.366/0.366/0.000 ms 00:20:34.236 21:21:28 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:34.236 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:34.236 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:20:34.236 00:20:34.236 --- 10.0.0.1 ping statistics --- 00:20:34.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:34.236 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:20:34.236 21:21:28 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:34.236 21:21:28 -- nvmf/common.sh@411 -- # return 0 00:20:34.236 21:21:28 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:34.236 21:21:28 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:34.236 21:21:28 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:34.236 21:21:28 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:34.236 21:21:28 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:34.236 21:21:28 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:34.236 21:21:28 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:34.236 21:21:28 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:20:34.236 21:21:28 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:34.236 21:21:28 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:34.236 21:21:28 -- common/autotest_common.sh@10 -- # set +x 00:20:34.236 21:21:28 -- nvmf/common.sh@470 -- # nvmfpid=1467917 00:20:34.236 21:21:28 -- nvmf/common.sh@471 -- # waitforlisten 1467917 00:20:34.236 21:21:28 -- common/autotest_common.sh@817 -- # '[' -z 1467917 ']' 00:20:34.236 21:21:28 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:34.236 21:21:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:34.236 21:21:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:34.236 21:21:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:34.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:34.236 21:21:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:34.236 21:21:28 -- common/autotest_common.sh@10 -- # set +x 00:20:34.494 [2024-04-23 21:21:28.531186] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 00:20:34.494 [2024-04-23 21:21:28.531288] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:34.494 EAL: No free 2048 kB hugepages reported on node 1 00:20:34.494 [2024-04-23 21:21:28.654534] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:34.494 [2024-04-23 21:21:28.757216] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:34.494 [2024-04-23 21:21:28.757265] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:34.494 [2024-04-23 21:21:28.757283] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:34.494 [2024-04-23 21:21:28.757295] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:34.494 [2024-04-23 21:21:28.757307] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:34.494 [2024-04-23 21:21:28.757375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:34.494 [2024-04-23 21:21:28.757494] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:34.494 [2024-04-23 21:21:28.757604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:34.494 [2024-04-23 21:21:28.757614] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:35.062 21:21:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:35.062 21:21:29 -- common/autotest_common.sh@850 -- # return 0 00:20:35.062 21:21:29 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:35.062 21:21:29 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:35.062 21:21:29 -- common/autotest_common.sh@10 -- # set +x 00:20:35.062 21:21:29 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:35.062 21:21:29 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:35.062 21:21:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:35.062 21:21:29 -- common/autotest_common.sh@10 -- # set +x 00:20:35.062 [2024-04-23 21:21:29.282753] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:35.062 21:21:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:35.062 21:21:29 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:35.062 21:21:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:35.062 21:21:29 -- common/autotest_common.sh@10 -- # set +x 00:20:35.062 Malloc0 00:20:35.062 21:21:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:35.062 21:21:29 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:20:35.062 21:21:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:35.062 21:21:29 -- common/autotest_common.sh@10 -- # set +x 00:20:35.323 21:21:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:35.323 21:21:29 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:35.323 21:21:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:35.323 21:21:29 -- common/autotest_common.sh@10 -- # set +x 00:20:35.323 21:21:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:35.324 21:21:29 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:35.324 21:21:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:35.324 21:21:29 -- common/autotest_common.sh@10 -- # set +x 00:20:35.324 [2024-04-23 21:21:29.352181] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:35.324 21:21:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:35.324 21:21:29 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:20:35.324 test case1: single bdev can't be used in multiple subsystems 00:20:35.324 21:21:29 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:20:35.324 21:21:29 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:20:35.324 21:21:29 -- common/autotest_common.sh@10 -- # set +x 00:20:35.324 21:21:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:35.324 21:21:29 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:35.324 21:21:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:35.324 21:21:29 -- common/autotest_common.sh@10 -- # set +x 00:20:35.324 21:21:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:35.324 21:21:29 -- target/nmic.sh@28 -- # nmic_status=0 00:20:35.324 21:21:29 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:20:35.324 21:21:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:35.324 21:21:29 -- common/autotest_common.sh@10 -- # set +x 00:20:35.324 [2024-04-23 21:21:29.375953] bdev.c:7988:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:20:35.324 [2024-04-23 21:21:29.375982] subsystem.c:1934:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:20:35.324 [2024-04-23 21:21:29.375994] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:35.324 request: 00:20:35.324 { 00:20:35.324 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:20:35.324 "namespace": { 00:20:35.324 "bdev_name": "Malloc0", 00:20:35.324 "no_auto_visible": false 00:20:35.324 }, 00:20:35.324 "method": "nvmf_subsystem_add_ns", 00:20:35.324 "req_id": 1 00:20:35.324 } 00:20:35.324 Got JSON-RPC error response 00:20:35.324 response: 00:20:35.324 { 00:20:35.324 "code": -32602, 00:20:35.324 "message": "Invalid parameters" 00:20:35.324 } 00:20:35.324 21:21:29 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:20:35.324 21:21:29 -- target/nmic.sh@29 -- # nmic_status=1 00:20:35.324 21:21:29 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:20:35.324 21:21:29 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:20:35.324 Adding namespace failed - expected result. 
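Test case 1 above turns on SPDK's exclusive bdev claim: the first nvmf_subsystem_add_ns opens Malloc0 with an exclusive_write claim for cnode1, so cnode2's attempt to open the same bdev fails (error=-1) and the RPC surfaces it as JSON-RPC -32602 Invalid parameters. The same negative check can be reproduced by hand against a running target with the RPC methods this run already uses; the surrounding shell is a sketch (rpc.py path and NQNs as in this log):

RPC=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
$RPC bdev_malloc_create 64 512 -b Malloc0                          # backing bdev
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0      # takes the exclusive claim
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
if ! $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
    echo 'second claim rejected, as expected'                      # JSON-RPC error -32602
fi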
00:20:35.324 21:21:29 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:20:35.324 test case2: host connect to nvmf target in multiple paths 00:20:35.324 21:21:29 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:35.324 21:21:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:35.324 21:21:29 -- common/autotest_common.sh@10 -- # set +x 00:20:35.324 [2024-04-23 21:21:29.388115] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:35.324 21:21:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:35.324 21:21:29 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:20:36.709 21:21:30 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:20:38.095 21:21:32 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:20:38.095 21:21:32 -- common/autotest_common.sh@1184 -- # local i=0 00:20:38.095 21:21:32 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:20:38.095 21:21:32 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:20:38.095 21:21:32 -- common/autotest_common.sh@1191 -- # sleep 2 00:20:40.629 21:21:34 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:20:40.629 21:21:34 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:20:40.629 21:21:34 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:20:40.629 21:21:34 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:20:40.629 21:21:34 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:20:40.629 21:21:34 -- common/autotest_common.sh@1194 -- # return 0 00:20:40.629 21:21:34 -- target/nmic.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:20:40.629 [global] 00:20:40.629 thread=1 00:20:40.629 invalidate=1 00:20:40.629 rw=write 00:20:40.629 time_based=1 00:20:40.629 runtime=1 00:20:40.629 ioengine=libaio 00:20:40.629 direct=1 00:20:40.629 bs=4096 00:20:40.629 iodepth=1 00:20:40.629 norandommap=0 00:20:40.629 numjobs=1 00:20:40.629 00:20:40.629 verify_dump=1 00:20:40.629 verify_backlog=512 00:20:40.629 verify_state_save=0 00:20:40.629 do_verify=1 00:20:40.629 verify=crc32c-intel 00:20:40.629 [job0] 00:20:40.629 filename=/dev/nvme0n1 00:20:40.629 Could not set queue depth (nvme0n1) 00:20:40.629 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:40.629 fio-3.35 00:20:40.629 Starting 1 thread 00:20:42.007 00:20:42.007 job0: (groupid=0, jobs=1): err= 0: pid=1469286: Tue Apr 23 21:21:35 2024 00:20:42.007 read: IOPS=21, BW=84.9KiB/s (86.9kB/s)(88.0KiB/1037msec) 00:20:42.007 slat (nsec): min=6391, max=34496, avg=29063.05, stdev=7515.82 00:20:42.007 clat (usec): min=41021, max=42109, avg=41924.25, stdev=217.51 00:20:42.007 lat (usec): min=41052, max=42137, avg=41953.32, stdev=215.84 00:20:42.007 clat percentiles (usec): 00:20:42.007 | 1.00th=[41157], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:20:42.007 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:20:42.007 | 
70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:20:42.007 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:20:42.007 | 99.99th=[42206] 00:20:42.007 write: IOPS=493, BW=1975KiB/s (2022kB/s)(2048KiB/1037msec); 0 zone resets 00:20:42.007 slat (nsec): min=4857, max=53565, avg=7553.91, stdev=5315.13 00:20:42.007 clat (usec): min=171, max=524, avg=213.68, stdev=41.22 00:20:42.007 lat (usec): min=177, max=578, avg=221.24, stdev=45.01 00:20:42.007 clat percentiles (usec): 00:20:42.007 | 1.00th=[ 176], 5.00th=[ 186], 10.00th=[ 190], 20.00th=[ 192], 00:20:42.007 | 30.00th=[ 194], 40.00th=[ 196], 50.00th=[ 198], 60.00th=[ 200], 00:20:42.007 | 70.00th=[ 204], 80.00th=[ 235], 90.00th=[ 269], 95.00th=[ 314], 00:20:42.007 | 99.00th=[ 363], 99.50th=[ 367], 99.90th=[ 529], 99.95th=[ 529], 00:20:42.007 | 99.99th=[ 529] 00:20:42.007 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:20:42.007 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:20:42.007 lat (usec) : 250=83.71%, 500=11.99%, 750=0.19% 00:20:42.007 lat (msec) : 50=4.12% 00:20:42.007 cpu : usr=0.19%, sys=0.39%, ctx=534, majf=0, minf=1 00:20:42.007 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:42.007 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.007 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.007 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:42.007 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:42.007 00:20:42.007 Run status group 0 (all jobs): 00:20:42.007 READ: bw=84.9KiB/s (86.9kB/s), 84.9KiB/s-84.9KiB/s (86.9kB/s-86.9kB/s), io=88.0KiB (90.1kB), run=1037-1037msec 00:20:42.007 WRITE: bw=1975KiB/s (2022kB/s), 1975KiB/s-1975KiB/s (2022kB/s-2022kB/s), io=2048KiB (2097kB), run=1037-1037msec 00:20:42.007 00:20:42.007 Disk stats (read/write): 00:20:42.007 nvme0n1: ios=68/512, merge=0/0, ticks=815/111, in_queue=926, util=92.89% 00:20:42.007 21:21:35 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:42.265 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:20:42.265 21:21:36 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:42.265 21:21:36 -- common/autotest_common.sh@1205 -- # local i=0 00:20:42.265 21:21:36 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:20:42.265 21:21:36 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:42.265 21:21:36 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:20:42.265 21:21:36 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:42.265 21:21:36 -- common/autotest_common.sh@1217 -- # return 0 00:20:42.265 21:21:36 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:20:42.265 21:21:36 -- target/nmic.sh@53 -- # nvmftestfini 00:20:42.265 21:21:36 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:42.265 21:21:36 -- nvmf/common.sh@117 -- # sync 00:20:42.265 21:21:36 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:42.265 21:21:36 -- nvmf/common.sh@120 -- # set +e 00:20:42.265 21:21:36 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:42.265 21:21:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:42.265 rmmod nvme_tcp 00:20:42.265 rmmod nvme_fabrics 00:20:42.265 rmmod nvme_keyring 00:20:42.265 21:21:36 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:42.265 21:21:36 -- nvmf/common.sh@124 -- # set -e 00:20:42.265 
21:21:36 -- nvmf/common.sh@125 -- # return 0 00:20:42.265 21:21:36 -- nvmf/common.sh@478 -- # '[' -n 1467917 ']' 00:20:42.265 21:21:36 -- nvmf/common.sh@479 -- # killprocess 1467917 00:20:42.265 21:21:36 -- common/autotest_common.sh@936 -- # '[' -z 1467917 ']' 00:20:42.265 21:21:36 -- common/autotest_common.sh@940 -- # kill -0 1467917 00:20:42.265 21:21:36 -- common/autotest_common.sh@941 -- # uname 00:20:42.265 21:21:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:42.265 21:21:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1467917 00:20:42.265 21:21:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:42.265 21:21:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:42.265 21:21:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1467917' 00:20:42.265 killing process with pid 1467917 00:20:42.265 21:21:36 -- common/autotest_common.sh@955 -- # kill 1467917 00:20:42.266 21:21:36 -- common/autotest_common.sh@960 -- # wait 1467917 00:20:42.832 21:21:36 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:42.832 21:21:36 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:42.832 21:21:36 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:42.832 21:21:36 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:42.832 21:21:36 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:42.832 21:21:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:42.832 21:21:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:42.832 21:21:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:44.780 21:21:38 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:44.780 00:20:44.780 real 0m16.495s 00:20:44.780 user 0m48.779s 00:20:44.780 sys 0m5.085s 00:20:44.780 21:21:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:44.780 21:21:39 -- common/autotest_common.sh@10 -- # set +x 00:20:44.780 ************************************ 00:20:44.780 END TEST nvmf_nmic 00:20:44.780 ************************************ 00:20:44.780 21:21:39 -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:20:44.780 21:21:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:44.780 21:21:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:44.780 21:21:39 -- common/autotest_common.sh@10 -- # set +x 00:20:45.119 ************************************ 00:20:45.119 START TEST nvmf_fio_target 00:20:45.119 ************************************ 00:20:45.119 21:21:39 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:20:45.119 * Looking for test storage... 
00:20:45.119 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:20:45.119 21:21:39 -- target/fio.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:20:45.119 21:21:39 -- nvmf/common.sh@7 -- # uname -s 00:20:45.119 21:21:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:45.119 21:21:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:45.119 21:21:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:45.119 21:21:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:45.119 21:21:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:45.119 21:21:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:45.119 21:21:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:45.119 21:21:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:45.119 21:21:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:45.119 21:21:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:45.119 21:21:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:20:45.119 21:21:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:20:45.119 21:21:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:45.119 21:21:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:45.119 21:21:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:20:45.119 21:21:39 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:45.119 21:21:39 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:20:45.119 21:21:39 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:45.119 21:21:39 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:45.119 21:21:39 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:45.119 21:21:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:45.119 21:21:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:45.119 21:21:39 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:45.119 21:21:39 -- paths/export.sh@5 -- # export PATH 00:20:45.119 21:21:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:45.119 21:21:39 -- nvmf/common.sh@47 -- # : 0 00:20:45.119 21:21:39 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:45.119 21:21:39 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:45.119 21:21:39 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:45.119 21:21:39 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:45.119 21:21:39 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:45.119 21:21:39 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:45.119 21:21:39 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:45.119 21:21:39 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:45.119 21:21:39 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:45.119 21:21:39 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:45.119 21:21:39 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:20:45.119 21:21:39 -- target/fio.sh@16 -- # nvmftestinit 00:20:45.119 21:21:39 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:45.119 21:21:39 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:45.119 21:21:39 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:45.119 21:21:39 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:45.119 21:21:39 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:45.119 21:21:39 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:45.119 21:21:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:45.119 21:21:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:45.119 21:21:39 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:20:45.119 21:21:39 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:45.119 21:21:39 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:45.119 21:21:39 -- common/autotest_common.sh@10 -- # set +x 00:20:51.698 21:21:44 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:51.698 21:21:44 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:51.698 21:21:44 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:51.698 21:21:44 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:51.698 21:21:44 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:51.698 21:21:44 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:51.698 21:21:44 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:51.698 21:21:44 -- nvmf/common.sh@295 -- # net_devs=() 
00:20:51.698 21:21:44 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:51.698 21:21:44 -- nvmf/common.sh@296 -- # e810=() 00:20:51.698 21:21:44 -- nvmf/common.sh@296 -- # local -ga e810 00:20:51.698 21:21:44 -- nvmf/common.sh@297 -- # x722=() 00:20:51.698 21:21:44 -- nvmf/common.sh@297 -- # local -ga x722 00:20:51.698 21:21:44 -- nvmf/common.sh@298 -- # mlx=() 00:20:51.698 21:21:44 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:51.698 21:21:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:51.698 21:21:44 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:51.698 21:21:44 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:51.698 21:21:44 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:51.698 21:21:44 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:51.698 21:21:44 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:51.698 21:21:44 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:51.698 21:21:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:51.698 21:21:44 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:51.698 21:21:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:51.698 21:21:44 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:51.698 21:21:44 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:51.698 21:21:44 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:51.698 21:21:44 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:20:51.698 21:21:44 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:20:51.698 21:21:44 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:20:51.698 21:21:44 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:51.698 21:21:44 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:51.698 21:21:44 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:20:51.698 Found 0000:27:00.0 (0x8086 - 0x159b) 00:20:51.698 21:21:44 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:51.698 21:21:44 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:51.698 21:21:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:51.698 21:21:44 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:51.698 21:21:44 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:51.698 21:21:44 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:51.698 21:21:44 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:20:51.698 Found 0000:27:00.1 (0x8086 - 0x159b) 00:20:51.698 21:21:44 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:51.698 21:21:44 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:51.698 21:21:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:51.698 21:21:44 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:51.698 21:21:44 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:51.698 21:21:44 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:51.698 21:21:44 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:20:51.698 21:21:44 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:51.698 21:21:44 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:51.698 21:21:44 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:51.698 21:21:44 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:51.698 21:21:44 -- nvmf/common.sh@389 -- # echo 'Found net devices under 
0000:27:00.0: cvl_0_0' 00:20:51.698 Found net devices under 0000:27:00.0: cvl_0_0 00:20:51.698 21:21:44 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:51.698 21:21:44 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:51.698 21:21:44 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:51.698 21:21:44 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:51.698 21:21:44 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:51.698 21:21:44 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:20:51.698 Found net devices under 0000:27:00.1: cvl_0_1 00:20:51.698 21:21:44 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:51.698 21:21:44 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:51.698 21:21:44 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:51.698 21:21:44 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:51.698 21:21:44 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:20:51.698 21:21:44 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:20:51.698 21:21:44 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:51.698 21:21:44 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:51.698 21:21:44 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:51.698 21:21:44 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:51.698 21:21:44 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:51.698 21:21:44 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:51.698 21:21:44 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:51.699 21:21:44 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:51.699 21:21:44 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:51.699 21:21:44 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:51.699 21:21:44 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:51.699 21:21:44 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:51.699 21:21:44 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:51.699 21:21:45 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:51.699 21:21:45 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:51.699 21:21:45 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:51.699 21:21:45 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:51.699 21:21:45 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:51.699 21:21:45 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:51.699 21:21:45 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:51.699 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:51.699 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.698 ms 00:20:51.699 00:20:51.699 --- 10.0.0.2 ping statistics --- 00:20:51.699 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:51.699 rtt min/avg/max/mdev = 0.698/0.698/0.698/0.000 ms 00:20:51.699 21:21:45 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:51.699 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:51.699 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.622 ms 00:20:51.699 00:20:51.699 --- 10.0.0.1 ping statistics --- 00:20:51.699 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:51.699 rtt min/avg/max/mdev = 0.622/0.622/0.622/0.000 ms 00:20:51.699 21:21:45 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:51.699 21:21:45 -- nvmf/common.sh@411 -- # return 0 00:20:51.699 21:21:45 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:51.699 21:21:45 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:51.699 21:21:45 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:51.699 21:21:45 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:51.699 21:21:45 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:51.699 21:21:45 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:51.699 21:21:45 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:51.699 21:21:45 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:20:51.699 21:21:45 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:51.699 21:21:45 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:51.699 21:21:45 -- common/autotest_common.sh@10 -- # set +x 00:20:51.699 21:21:45 -- nvmf/common.sh@470 -- # nvmfpid=1473757 00:20:51.699 21:21:45 -- nvmf/common.sh@471 -- # waitforlisten 1473757 00:20:51.699 21:21:45 -- common/autotest_common.sh@817 -- # '[' -z 1473757 ']' 00:20:51.699 21:21:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:51.699 21:21:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:51.699 21:21:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:51.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:51.699 21:21:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:51.699 21:21:45 -- common/autotest_common.sh@10 -- # set +x 00:20:51.699 21:21:45 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:51.699 [2024-04-23 21:21:45.316539] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 00:20:51.699 [2024-04-23 21:21:45.316687] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:51.699 EAL: No free 2048 kB hugepages reported on node 1 00:20:51.699 [2024-04-23 21:21:45.474555] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:51.699 [2024-04-23 21:21:45.586063] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:51.699 [2024-04-23 21:21:45.586114] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:51.699 [2024-04-23 21:21:45.586128] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:51.699 [2024-04-23 21:21:45.586139] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:51.699 [2024-04-23 21:21:45.586148] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
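Before the reactors come up, nvmftestinit has already split the two E810 ports across network namespaces: cvl_0_0 becomes the target port inside cvl_0_0_ns_spdk (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator port (10.0.0.1), and the pings above validate both directions. Condensed from the trace, the wiring plus the start-and-wait step look like this; the polling loop at the end is a hypothetical stand-in for the harness's waitforlisten:

# Namespace wiring, as executed by nvmf_tcp_init above:
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side (root namespace)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Launch the target inside the namespace, then poll its RPC socket until it
# answers (sketch; waitforlisten does the equivalent):
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
pid=$!
until /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.2
done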
00:20:51.699 [2024-04-23 21:21:45.586226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:51.699 [2024-04-23 21:21:45.586334] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:51.699 [2024-04-23 21:21:45.586440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:51.699 [2024-04-23 21:21:45.586452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:51.960 21:21:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:51.960 21:21:46 -- common/autotest_common.sh@850 -- # return 0 00:20:51.960 21:21:46 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:51.960 21:21:46 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:51.960 21:21:46 -- common/autotest_common.sh@10 -- # set +x 00:20:51.960 21:21:46 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:51.960 21:21:46 -- target/fio.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:51.960 [2024-04-23 21:21:46.207717] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:52.220 21:21:46 -- target/fio.sh@21 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:52.220 21:21:46 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:20:52.220 21:21:46 -- target/fio.sh@22 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:52.481 21:21:46 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:20:52.481 21:21:46 -- target/fio.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:52.742 21:21:46 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:20:52.742 21:21:46 -- target/fio.sh@25 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:52.742 21:21:46 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:20:52.742 21:21:46 -- target/fio.sh@26 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:20:53.002 21:21:47 -- target/fio.sh@29 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:53.263 21:21:47 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:20:53.263 21:21:47 -- target/fio.sh@30 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:53.263 21:21:47 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:20:53.263 21:21:47 -- target/fio.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:53.523 21:21:47 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:20:53.523 21:21:47 -- target/fio.sh@32 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:20:53.781 21:21:47 -- target/fio.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:20:53.781 21:21:47 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:20:53.781 21:21:47 -- target/fio.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:54.041 21:21:48 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:20:54.041 21:21:48 -- target/fio.sh@36 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:54.041 21:21:48 -- target/fio.sh@38 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:54.303 [2024-04-23 21:21:48.357947] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:54.303 21:21:48 -- target/fio.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:20:54.303 21:21:48 -- target/fio.sh@44 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:20:54.565 21:21:48 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:20:55.943 21:21:50 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:20:55.943 21:21:50 -- common/autotest_common.sh@1184 -- # local i=0 00:20:55.943 21:21:50 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:20:55.943 21:21:50 -- common/autotest_common.sh@1186 -- # [[ -n 4 ]] 00:20:55.944 21:21:50 -- common/autotest_common.sh@1187 -- # nvme_device_counter=4 00:20:55.944 21:21:50 -- common/autotest_common.sh@1191 -- # sleep 2 00:20:57.852 21:21:52 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:20:57.852 21:21:52 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:20:57.852 21:21:52 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:20:58.110 21:21:52 -- common/autotest_common.sh@1193 -- # nvme_devices=4 00:20:58.111 21:21:52 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:20:58.111 21:21:52 -- common/autotest_common.sh@1194 -- # return 0 00:20:58.111 21:21:52 -- target/fio.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:20:58.111 [global] 00:20:58.111 thread=1 00:20:58.111 invalidate=1 00:20:58.111 rw=write 00:20:58.111 time_based=1 00:20:58.111 runtime=1 00:20:58.111 ioengine=libaio 00:20:58.111 direct=1 00:20:58.111 bs=4096 00:20:58.111 iodepth=1 00:20:58.111 norandommap=0 00:20:58.111 numjobs=1 00:20:58.111 00:20:58.111 verify_dump=1 00:20:58.111 verify_backlog=512 00:20:58.111 verify_state_save=0 00:20:58.111 do_verify=1 00:20:58.111 verify=crc32c-intel 00:20:58.111 [job0] 00:20:58.111 filename=/dev/nvme0n1 00:20:58.111 [job1] 00:20:58.111 filename=/dev/nvme0n2 00:20:58.111 [job2] 00:20:58.111 filename=/dev/nvme0n3 00:20:58.111 [job3] 00:20:58.111 filename=/dev/nvme0n4 00:20:58.111 Could not set queue depth (nvme0n1) 00:20:58.111 Could not set queue depth (nvme0n2) 00:20:58.111 Could not set queue depth (nvme0n3) 00:20:58.111 Could not set queue depth (nvme0n4) 00:20:58.370 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:58.370 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:58.370 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:58.370 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:58.370 fio-3.35 00:20:58.370 Starting 4 threads 00:20:59.763 
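The fio-wrapper job file printed above maps one job per connected namespace (/dev/nvme0n1 through nvme0n4); the repeated "Could not set queue depth" lines come from fio itself and, at iodepth=1, are inconsequential to the result. For reference, job0 translated to a standalone fio invocation would look roughly like this (a sketch with the same parameters, using stock fio flags):

fio --name=job0 --filename=/dev/nvme0n1 --thread \
    --rw=write --bs=4096 --iodepth=1 --numjobs=1 \
    --ioengine=libaio --direct=1 --time_based --runtime=1 \
    --invalidate=1 --verify=crc32c-intel --do_verify=1 \
    --verify_dump=1 --verify_backlog=512 --verify_state_save=0

The per-job result blocks that follow echo these parameters back (rw=write, bs=4096B, iodepth=1), one block per namespace.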
00:20:59.763 job0: (groupid=0, jobs=1): err= 0: pid=1475416: Tue Apr 23 21:21:53 2024 00:20:59.763 read: IOPS=19, BW=79.9KiB/s (81.8kB/s)(80.0KiB/1001msec) 00:20:59.763 slat (nsec): min=7485, max=43000, avg=33092.90, stdev=6972.17 00:20:59.763 clat (usec): min=954, max=42063, avg=39852.06, stdev=9157.19 00:20:59.763 lat (usec): min=986, max=42100, avg=39885.16, stdev=9157.45 00:20:59.763 clat percentiles (usec): 00:20:59.763 | 1.00th=[ 955], 5.00th=[ 955], 10.00th=[41157], 20.00th=[41681], 00:20:59.763 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:20:59.763 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:20:59.763 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:20:59.763 | 99.99th=[42206] 00:20:59.763 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:20:59.763 slat (nsec): min=5961, max=67855, avg=28185.21, stdev=15041.74 00:20:59.763 clat (usec): min=190, max=733, avg=361.49, stdev=96.94 00:20:59.763 lat (usec): min=205, max=801, avg=389.68, stdev=102.06 00:20:59.763 clat percentiles (usec): 00:20:59.763 | 1.00th=[ 200], 5.00th=[ 221], 10.00th=[ 253], 20.00th=[ 277], 00:20:59.763 | 30.00th=[ 306], 40.00th=[ 326], 50.00th=[ 351], 60.00th=[ 371], 00:20:59.763 | 70.00th=[ 396], 80.00th=[ 429], 90.00th=[ 494], 95.00th=[ 545], 00:20:59.763 | 99.00th=[ 644], 99.50th=[ 693], 99.90th=[ 734], 99.95th=[ 734], 00:20:59.763 | 99.99th=[ 734] 00:20:59.763 bw ( KiB/s): min= 4096, max= 4096, per=21.52%, avg=4096.00, stdev= 0.00, samples=1 00:20:59.763 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:20:59.763 lat (usec) : 250=9.21%, 500=77.63%, 750=9.40%, 1000=0.19% 00:20:59.763 lat (msec) : 50=3.57% 00:20:59.763 cpu : usr=1.00%, sys=1.70%, ctx=533, majf=0, minf=1 00:20:59.763 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:59.763 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.763 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.763 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:59.763 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:59.763 job1: (groupid=0, jobs=1): err= 0: pid=1475429: Tue Apr 23 21:21:53 2024 00:20:59.763 read: IOPS=19, BW=79.9KiB/s (81.8kB/s)(80.0KiB/1001msec) 00:20:59.763 slat (nsec): min=5954, max=45791, avg=13215.15, stdev=13997.18 00:20:59.763 clat (usec): min=41102, max=42211, avg=41890.28, stdev=273.75 00:20:59.763 lat (usec): min=41148, max=42217, avg=41903.49, stdev=268.71 00:20:59.763 clat percentiles (usec): 00:20:59.763 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:20:59.763 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:20:59.763 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:20:59.763 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:20:59.764 | 99.99th=[42206] 00:20:59.764 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:20:59.764 slat (nsec): min=5875, max=59501, avg=10916.48, stdev=3718.02 00:20:59.764 clat (usec): min=192, max=682, avg=304.14, stdev=80.18 00:20:59.764 lat (usec): min=200, max=742, avg=315.06, stdev=80.87 00:20:59.764 clat percentiles (usec): 00:20:59.764 | 1.00th=[ 200], 5.00th=[ 208], 10.00th=[ 221], 20.00th=[ 239], 00:20:59.764 | 30.00th=[ 262], 40.00th=[ 273], 50.00th=[ 285], 60.00th=[ 302], 00:20:59.764 | 70.00th=[ 318], 80.00th=[ 347], 90.00th=[ 429], 95.00th=[ 469], 
00:20:59.764 | 99.00th=[ 537], 99.50th=[ 586], 99.90th=[ 685], 99.95th=[ 685], 00:20:59.764 | 99.99th=[ 685] 00:20:59.764 bw ( KiB/s): min= 4096, max= 4096, per=21.52%, avg=4096.00, stdev= 0.00, samples=1 00:20:59.764 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:20:59.764 lat (usec) : 250=22.74%, 500=70.86%, 750=2.63% 00:20:59.764 lat (msec) : 50=3.76% 00:20:59.764 cpu : usr=0.10%, sys=0.90%, ctx=532, majf=0, minf=1 00:20:59.764 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:59.764 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.764 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.764 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:59.764 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:59.764 job2: (groupid=0, jobs=1): err= 0: pid=1475450: Tue Apr 23 21:21:53 2024 00:20:59.764 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:20:59.764 slat (nsec): min=4038, max=48420, avg=8934.57, stdev=7873.94 00:20:59.764 clat (usec): min=235, max=752, avg=340.48, stdev=65.74 00:20:59.764 lat (usec): min=241, max=757, avg=349.42, stdev=71.11 00:20:59.764 clat percentiles (usec): 00:20:59.764 | 1.00th=[ 253], 5.00th=[ 269], 10.00th=[ 277], 20.00th=[ 289], 00:20:59.764 | 30.00th=[ 306], 40.00th=[ 318], 50.00th=[ 330], 60.00th=[ 338], 00:20:59.764 | 70.00th=[ 351], 80.00th=[ 363], 90.00th=[ 424], 95.00th=[ 474], 00:20:59.764 | 99.00th=[ 578], 99.50th=[ 586], 99.90th=[ 668], 99.95th=[ 750], 00:20:59.764 | 99.99th=[ 750] 00:20:59.764 write: IOPS=1962, BW=7848KiB/s (8037kB/s)(7856KiB/1001msec); 0 zone resets 00:20:59.764 slat (nsec): min=5467, max=53227, avg=11631.51, stdev=10561.42 00:20:59.764 clat (usec): min=141, max=768, avg=219.32, stdev=62.36 00:20:59.764 lat (usec): min=148, max=781, avg=230.95, stdev=70.08 00:20:59.764 clat percentiles (usec): 00:20:59.764 | 1.00th=[ 153], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 172], 00:20:59.764 | 30.00th=[ 186], 40.00th=[ 196], 50.00th=[ 204], 60.00th=[ 212], 00:20:59.764 | 70.00th=[ 223], 80.00th=[ 243], 90.00th=[ 326], 95.00th=[ 355], 00:20:59.764 | 99.00th=[ 404], 99.50th=[ 424], 99.90th=[ 709], 99.95th=[ 766], 00:20:59.764 | 99.99th=[ 766] 00:20:59.764 bw ( KiB/s): min= 8192, max= 8192, per=43.04%, avg=8192.00, stdev= 0.00, samples=1 00:20:59.764 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:20:59.764 lat (usec) : 250=46.43%, 500=51.60%, 750=1.91%, 1000=0.06% 00:20:59.764 cpu : usr=1.90%, sys=5.50%, ctx=3501, majf=0, minf=1 00:20:59.764 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:59.764 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.764 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.764 issued rwts: total=1536,1964,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:59.764 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:59.764 job3: (groupid=0, jobs=1): err= 0: pid=1475457: Tue Apr 23 21:21:53 2024 00:20:59.764 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:20:59.764 slat (nsec): min=4202, max=44739, avg=9406.31, stdev=7110.07 00:20:59.764 clat (usec): min=270, max=711, avg=349.20, stdev=52.45 00:20:59.764 lat (usec): min=274, max=716, avg=358.61, stdev=57.48 00:20:59.764 clat percentiles (usec): 00:20:59.764 | 1.00th=[ 285], 5.00th=[ 297], 10.00th=[ 306], 20.00th=[ 314], 00:20:59.764 | 30.00th=[ 322], 40.00th=[ 330], 50.00th=[ 334], 60.00th=[ 343], 
00:20:59.764 | 70.00th=[ 351], 80.00th=[ 363], 90.00th=[ 441], 95.00th=[ 474], 00:20:59.764 | 99.00th=[ 523], 99.50th=[ 545], 99.90th=[ 635], 99.95th=[ 709], 00:20:59.764 | 99.99th=[ 709] 00:20:59.764 write: IOPS=1773, BW=7093KiB/s (7263kB/s)(7100KiB/1001msec); 0 zone resets 00:20:59.764 slat (nsec): min=5493, max=59386, avg=12204.67, stdev=10038.24 00:20:59.764 clat (usec): min=164, max=1044, avg=235.15, stdev=62.61 00:20:59.764 lat (usec): min=171, max=1104, avg=247.35, stdev=69.82 00:20:59.764 clat percentiles (usec): 00:20:59.764 | 1.00th=[ 174], 5.00th=[ 180], 10.00th=[ 184], 20.00th=[ 190], 00:20:59.764 | 30.00th=[ 198], 40.00th=[ 206], 50.00th=[ 215], 60.00th=[ 223], 00:20:59.764 | 70.00th=[ 237], 80.00th=[ 269], 90.00th=[ 338], 95.00th=[ 367], 00:20:59.764 | 99.00th=[ 412], 99.50th=[ 441], 99.90th=[ 578], 99.95th=[ 1045], 00:20:59.764 | 99.99th=[ 1045] 00:20:59.764 bw ( KiB/s): min= 8192, max= 8192, per=43.04%, avg=8192.00, stdev= 0.00, samples=1 00:20:59.764 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:20:59.764 lat (usec) : 250=40.05%, 500=58.92%, 750=1.00% 00:20:59.764 lat (msec) : 2=0.03% 00:20:59.764 cpu : usr=1.40%, sys=4.30%, ctx=3311, majf=0, minf=1 00:20:59.764 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:59.764 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.764 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.764 issued rwts: total=1536,1775,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:59.764 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:59.764 00:20:59.764 Run status group 0 (all jobs): 00:20:59.764 READ: bw=12.1MiB/s (12.7MB/s), 79.9KiB/s-6138KiB/s (81.8kB/s-6285kB/s), io=12.2MiB (12.7MB), run=1001-1001msec 00:20:59.764 WRITE: bw=18.6MiB/s (19.5MB/s), 2046KiB/s-7848KiB/s (2095kB/s-8037kB/s), io=18.6MiB (19.5MB), run=1001-1001msec 00:20:59.764 00:20:59.764 Disk stats (read/write): 00:20:59.764 nvme0n1: ios=41/512, merge=0/0, ticks=1555/155, in_queue=1710, util=96.39% 00:20:59.764 nvme0n2: ios=63/512, merge=0/0, ticks=751/154, in_queue=905, util=94.57% 00:20:59.764 nvme0n3: ios=1259/1536, merge=0/0, ticks=1366/319, in_queue=1685, util=96.71% 00:20:59.764 nvme0n4: ios=1195/1536, merge=0/0, ticks=669/363, in_queue=1032, util=89.58% 00:20:59.764 21:21:53 -- target/fio.sh@51 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:20:59.764 [global] 00:20:59.764 thread=1 00:20:59.764 invalidate=1 00:20:59.764 rw=randwrite 00:20:59.764 time_based=1 00:20:59.764 runtime=1 00:20:59.764 ioengine=libaio 00:20:59.764 direct=1 00:20:59.764 bs=4096 00:20:59.764 iodepth=1 00:20:59.764 norandommap=0 00:20:59.764 numjobs=1 00:20:59.764 00:20:59.764 verify_dump=1 00:20:59.764 verify_backlog=512 00:20:59.764 verify_state_save=0 00:20:59.764 do_verify=1 00:20:59.764 verify=crc32c-intel 00:20:59.764 [job0] 00:20:59.764 filename=/dev/nvme0n1 00:20:59.764 [job1] 00:20:59.764 filename=/dev/nvme0n2 00:20:59.764 [job2] 00:20:59.764 filename=/dev/nvme0n3 00:20:59.764 [job3] 00:20:59.764 filename=/dev/nvme0n4 00:20:59.764 Could not set queue depth (nvme0n1) 00:20:59.764 Could not set queue depth (nvme0n2) 00:20:59.764 Could not set queue depth (nvme0n3) 00:20:59.764 Could not set queue depth (nvme0n4) 00:21:00.025 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:00.025 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=1 00:21:00.025 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:00.025 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:00.025 fio-3.35 00:21:00.025 Starting 4 threads 00:21:01.403 00:21:01.403 job0: (groupid=0, jobs=1): err= 0: pid=1475926: Tue Apr 23 21:21:55 2024 00:21:01.403 read: IOPS=1486, BW=5946KiB/s (6089kB/s)(5952KiB/1001msec) 00:21:01.403 slat (nsec): min=3968, max=50909, avg=11300.35, stdev=8026.14 00:21:01.403 clat (usec): min=283, max=593, avg=402.27, stdev=76.22 00:21:01.403 lat (usec): min=288, max=618, avg=413.57, stdev=81.85 00:21:01.403 clat percentiles (usec): 00:21:01.403 | 1.00th=[ 293], 5.00th=[ 306], 10.00th=[ 310], 20.00th=[ 322], 00:21:01.403 | 30.00th=[ 347], 40.00th=[ 379], 50.00th=[ 392], 60.00th=[ 404], 00:21:01.403 | 70.00th=[ 429], 80.00th=[ 494], 90.00th=[ 519], 95.00th=[ 537], 00:21:01.403 | 99.00th=[ 562], 99.50th=[ 578], 99.90th=[ 594], 99.95th=[ 594], 00:21:01.403 | 99.99th=[ 594] 00:21:01.403 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:21:01.403 slat (nsec): min=5352, max=63168, avg=12801.62, stdev=10808.45 00:21:01.403 clat (usec): min=169, max=704, avg=230.75, stdev=50.16 00:21:01.403 lat (usec): min=176, max=721, avg=243.55, stdev=57.08 00:21:01.403 clat percentiles (usec): 00:21:01.404 | 1.00th=[ 178], 5.00th=[ 188], 10.00th=[ 192], 20.00th=[ 198], 00:21:01.404 | 30.00th=[ 202], 40.00th=[ 206], 50.00th=[ 212], 60.00th=[ 219], 00:21:01.404 | 70.00th=[ 231], 80.00th=[ 258], 90.00th=[ 314], 95.00th=[ 322], 00:21:01.404 | 99.00th=[ 355], 99.50th=[ 424], 99.90th=[ 676], 99.95th=[ 701], 00:21:01.404 | 99.99th=[ 701] 00:21:01.404 bw ( KiB/s): min= 7176, max= 7176, per=60.79%, avg=7176.00, stdev= 0.00, samples=1 00:21:01.404 iops : min= 1794, max= 1794, avg=1794.00, stdev= 0.00, samples=1 00:21:01.404 lat (usec) : 250=39.95%, 500=51.72%, 750=8.33% 00:21:01.404 cpu : usr=1.80%, sys=3.80%, ctx=3026, majf=0, minf=1 00:21:01.404 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:01.404 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:01.404 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:01.404 issued rwts: total=1488,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:01.404 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:01.404 job1: (groupid=0, jobs=1): err= 0: pid=1475927: Tue Apr 23 21:21:55 2024 00:21:01.404 read: IOPS=22, BW=88.4KiB/s (90.5kB/s)(92.0KiB/1041msec) 00:21:01.404 slat (nsec): min=8499, max=47251, avg=33592.70, stdev=7926.99 00:21:01.404 clat (usec): min=1093, max=42025, avg=40022.79, stdev=8492.79 00:21:01.404 lat (usec): min=1141, max=42059, avg=40056.39, stdev=8489.94 00:21:01.404 clat percentiles (usec): 00:21:01.404 | 1.00th=[ 1090], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:21:01.404 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:21:01.404 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:21:01.404 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:21:01.404 | 99.99th=[42206] 00:21:01.404 write: IOPS=491, BW=1967KiB/s (2015kB/s)(2048KiB/1041msec); 0 zone resets 00:21:01.404 slat (nsec): min=6082, max=54770, avg=8207.12, stdev=3155.80 00:21:01.404 clat (usec): min=171, max=711, avg=223.41, stdev=38.70 00:21:01.404 lat (usec): min=178, max=765, avg=231.62, 
stdev=40.42 00:21:01.404 clat percentiles (usec): 00:21:01.404 | 1.00th=[ 174], 5.00th=[ 186], 10.00th=[ 192], 20.00th=[ 202], 00:21:01.404 | 30.00th=[ 208], 40.00th=[ 215], 50.00th=[ 221], 60.00th=[ 225], 00:21:01.404 | 70.00th=[ 231], 80.00th=[ 239], 90.00th=[ 253], 95.00th=[ 265], 00:21:01.404 | 99.00th=[ 347], 99.50th=[ 469], 99.90th=[ 709], 99.95th=[ 709], 00:21:01.404 | 99.99th=[ 709] 00:21:01.404 bw ( KiB/s): min= 4096, max= 4096, per=34.70%, avg=4096.00, stdev= 0.00, samples=1 00:21:01.404 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:21:01.404 lat (usec) : 250=84.30%, 500=11.03%, 750=0.37% 00:21:01.404 lat (msec) : 2=0.19%, 50=4.11% 00:21:01.404 cpu : usr=0.10%, sys=0.58%, ctx=536, majf=0, minf=1 00:21:01.404 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:01.404 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:01.404 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:01.404 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:01.404 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:01.404 job2: (groupid=0, jobs=1): err= 0: pid=1475928: Tue Apr 23 21:21:55 2024 00:21:01.404 read: IOPS=22, BW=91.7KiB/s (93.9kB/s)(92.0KiB/1003msec) 00:21:01.404 slat (nsec): min=9096, max=45030, avg=33593.43, stdev=9380.45 00:21:01.404 clat (usec): min=850, max=42058, avg=38319.07, stdev=11760.66 00:21:01.404 lat (usec): min=892, max=42079, avg=38352.67, stdev=11758.73 00:21:01.404 clat percentiles (usec): 00:21:01.404 | 1.00th=[ 848], 5.00th=[ 1254], 10.00th=[41157], 20.00th=[41681], 00:21:01.404 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:21:01.404 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:21:01.404 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:21:01.404 | 99.99th=[42206] 00:21:01.404 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:21:01.404 slat (nsec): min=6332, max=79160, avg=8465.31, stdev=3871.65 00:21:01.404 clat (usec): min=180, max=915, avg=225.26, stdev=48.00 00:21:01.404 lat (usec): min=187, max=994, avg=233.73, stdev=50.75 00:21:01.404 clat percentiles (usec): 00:21:01.404 | 1.00th=[ 186], 5.00th=[ 192], 10.00th=[ 198], 20.00th=[ 204], 00:21:01.404 | 30.00th=[ 210], 40.00th=[ 215], 50.00th=[ 219], 60.00th=[ 225], 00:21:01.404 | 70.00th=[ 229], 80.00th=[ 235], 90.00th=[ 247], 95.00th=[ 269], 00:21:01.404 | 99.00th=[ 420], 99.50th=[ 529], 99.90th=[ 914], 99.95th=[ 914], 00:21:01.404 | 99.99th=[ 914] 00:21:01.404 bw ( KiB/s): min= 4096, max= 4096, per=34.70%, avg=4096.00, stdev= 0.00, samples=1 00:21:01.404 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:21:01.404 lat (usec) : 250=88.22%, 500=6.92%, 750=0.37%, 1000=0.37% 00:21:01.404 lat (msec) : 2=0.19%, 50=3.93% 00:21:01.404 cpu : usr=0.20%, sys=0.70%, ctx=535, majf=0, minf=1 00:21:01.404 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:01.404 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:01.404 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:01.404 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:01.404 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:01.404 job3: (groupid=0, jobs=1): err= 0: pid=1475929: Tue Apr 23 21:21:55 2024 00:21:01.404 read: IOPS=20, BW=83.9KiB/s (85.9kB/s)(84.0KiB/1001msec) 00:21:01.404 slat (nsec): min=8283, 
max=40875, avg=31154.29, stdev=9148.23 00:21:01.404 clat (usec): min=41481, max=42034, avg=41924.99, stdev=116.09 00:21:01.404 lat (usec): min=41489, max=42063, avg=41956.14, stdev=120.29 00:21:01.404 clat percentiles (usec): 00:21:01.404 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:21:01.404 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:21:01.404 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:21:01.404 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:21:01.404 | 99.99th=[42206] 00:21:01.404 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:21:01.404 slat (nsec): min=5856, max=55989, avg=8204.35, stdev=2782.83 00:21:01.404 clat (usec): min=188, max=574, avg=224.22, stdev=33.84 00:21:01.404 lat (usec): min=195, max=630, avg=232.42, stdev=35.33 00:21:01.404 clat percentiles (usec): 00:21:01.404 | 1.00th=[ 192], 5.00th=[ 196], 10.00th=[ 200], 20.00th=[ 204], 00:21:01.404 | 30.00th=[ 208], 40.00th=[ 215], 50.00th=[ 219], 60.00th=[ 223], 00:21:01.404 | 70.00th=[ 229], 80.00th=[ 235], 90.00th=[ 249], 95.00th=[ 260], 00:21:01.404 | 99.00th=[ 408], 99.50th=[ 433], 99.90th=[ 578], 99.95th=[ 578], 00:21:01.404 | 99.99th=[ 578] 00:21:01.404 bw ( KiB/s): min= 4096, max= 4096, per=34.70%, avg=4096.00, stdev= 0.00, samples=1 00:21:01.404 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:21:01.404 lat (usec) : 250=86.87%, 500=9.01%, 750=0.19% 00:21:01.404 lat (msec) : 50=3.94% 00:21:01.404 cpu : usr=0.20%, sys=0.40%, ctx=533, majf=0, minf=1 00:21:01.404 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:01.404 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:01.404 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:01.404 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:01.404 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:01.404 00:21:01.404 Run status group 0 (all jobs): 00:21:01.404 READ: bw=5975KiB/s (6118kB/s), 83.9KiB/s-5946KiB/s (85.9kB/s-6089kB/s), io=6220KiB (6369kB), run=1001-1041msec 00:21:01.404 WRITE: bw=11.5MiB/s (12.1MB/s), 1967KiB/s-6138KiB/s (2015kB/s-6285kB/s), io=12.0MiB (12.6MB), run=1001-1041msec 00:21:01.404 00:21:01.404 Disk stats (read/write): 00:21:01.404 nvme0n1: ios=1072/1502, merge=0/0, ticks=1322/337, in_queue=1659, util=84.57% 00:21:01.404 nvme0n2: ios=68/512, merge=0/0, ticks=799/115, in_queue=914, util=90.72% 00:21:01.404 nvme0n3: ios=76/512, merge=0/0, ticks=813/115, in_queue=928, util=94.95% 00:21:01.404 nvme0n4: ios=74/512, merge=0/0, ticks=845/115, in_queue=960, util=96.70% 00:21:01.404 21:21:55 -- target/fio.sh@52 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:21:01.404 [global] 00:21:01.404 thread=1 00:21:01.404 invalidate=1 00:21:01.404 rw=write 00:21:01.404 time_based=1 00:21:01.404 runtime=1 00:21:01.404 ioengine=libaio 00:21:01.404 direct=1 00:21:01.404 bs=4096 00:21:01.404 iodepth=128 00:21:01.404 norandommap=0 00:21:01.404 numjobs=1 00:21:01.404 00:21:01.404 verify_dump=1 00:21:01.404 verify_backlog=512 00:21:01.404 verify_state_save=0 00:21:01.404 do_verify=1 00:21:01.404 verify=crc32c-intel 00:21:01.404 [job0] 00:21:01.404 filename=/dev/nvme0n1 00:21:01.404 [job1] 00:21:01.404 filename=/dev/nvme0n2 00:21:01.404 [job2] 00:21:01.404 filename=/dev/nvme0n3 00:21:01.404 [job3] 00:21:01.404 filename=/dev/nvme0n4 00:21:01.404 Could not 
set queue depth (nvme0n1) 00:21:01.404 Could not set queue depth (nvme0n2) 00:21:01.404 Could not set queue depth (nvme0n3) 00:21:01.404 Could not set queue depth (nvme0n4) 00:21:01.664 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:01.664 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:01.664 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:01.664 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:01.664 fio-3.35 00:21:01.664 Starting 4 threads 00:21:03.044 00:21:03.044 job0: (groupid=0, jobs=1): err= 0: pid=1476401: Tue Apr 23 21:21:57 2024 00:21:03.044 read: IOPS=4919, BW=19.2MiB/s (20.1MB/s)(19.3MiB/1003msec) 00:21:03.044 slat (nsec): min=896, max=29082k, avg=101338.19, stdev=931180.31 00:21:03.044 clat (usec): min=793, max=56240, avg=13758.40, stdev=8302.40 00:21:03.044 lat (usec): min=799, max=74940, avg=13859.73, stdev=8371.59 00:21:03.044 clat percentiles (usec): 00:21:03.044 | 1.00th=[ 2008], 5.00th=[ 4080], 10.00th=[ 8094], 20.00th=[10290], 00:21:03.044 | 30.00th=[10683], 40.00th=[11076], 50.00th=[11469], 60.00th=[12387], 00:21:03.044 | 70.00th=[13698], 80.00th=[16319], 90.00th=[22152], 95.00th=[26346], 00:21:03.044 | 99.00th=[55837], 99.50th=[55837], 99.90th=[56361], 99.95th=[56361], 00:21:03.044 | 99.99th=[56361] 00:21:03.044 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:21:03.044 slat (nsec): min=1478, max=10089k, avg=63328.39, stdev=483433.46 00:21:03.044 clat (usec): min=467, max=52027, avg=11608.17, stdev=7931.23 00:21:03.044 lat (usec): min=472, max=52031, avg=11671.50, stdev=7950.75 00:21:03.044 clat percentiles (usec): 00:21:03.044 | 1.00th=[ 1483], 5.00th=[ 2343], 10.00th=[ 4490], 20.00th=[ 7177], 00:21:03.044 | 30.00th=[ 7898], 40.00th=[ 9503], 50.00th=[10421], 60.00th=[11207], 00:21:03.044 | 70.00th=[11994], 80.00th=[13566], 90.00th=[18220], 95.00th=[26084], 00:21:03.044 | 99.00th=[47449], 99.50th=[49546], 99.90th=[51119], 99.95th=[52167], 00:21:03.044 | 99.99th=[52167] 00:21:03.044 bw ( KiB/s): min=19384, max=21576, per=28.97%, avg=20480.00, stdev=1549.98, samples=2 00:21:03.044 iops : min= 4846, max= 5394, avg=5120.00, stdev=387.49, samples=2 00:21:03.044 lat (usec) : 500=0.01%, 1000=0.09% 00:21:03.044 lat (msec) : 2=2.02%, 4=5.05%, 10=24.30%, 20=58.71%, 50=8.82% 00:21:03.044 lat (msec) : 100=0.99% 00:21:03.044 cpu : usr=1.70%, sys=4.29%, ctx=572, majf=0, minf=1 00:21:03.044 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:21:03.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:03.044 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:03.044 issued rwts: total=4934,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:03.044 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:03.044 job1: (groupid=0, jobs=1): err= 0: pid=1476402: Tue Apr 23 21:21:57 2024 00:21:03.044 read: IOPS=3897, BW=15.2MiB/s (16.0MB/s)(15.9MiB/1043msec) 00:21:03.044 slat (nsec): min=858, max=10662k, avg=109996.59, stdev=692580.34 00:21:03.044 clat (usec): min=5676, max=55513, avg=15632.36, stdev=9315.86 00:21:03.044 lat (usec): min=5679, max=59510, avg=15742.36, stdev=9348.64 00:21:03.044 clat percentiles (usec): 00:21:03.044 | 1.00th=[ 5800], 5.00th=[ 8455], 10.00th=[ 9241], 20.00th=[10159], 00:21:03.044 | 30.00th=[10421], 
40.00th=[11076], 50.00th=[12256], 60.00th=[13566], 00:21:03.044 | 70.00th=[15664], 80.00th=[19006], 90.00th=[29230], 95.00th=[36963], 00:21:03.044 | 99.00th=[55313], 99.50th=[55313], 99.90th=[55313], 99.95th=[55313], 00:21:03.044 | 99.99th=[55313] 00:21:03.044 write: IOPS=3927, BW=15.3MiB/s (16.1MB/s)(16.0MiB/1043msec); 0 zone resets 00:21:03.044 slat (nsec): min=1452, max=14162k, avg=124339.70, stdev=733501.16 00:21:03.044 clat (usec): min=2855, max=79882, avg=16802.10, stdev=12674.79 00:21:03.044 lat (usec): min=2863, max=79885, avg=16926.44, stdev=12750.77 00:21:03.044 clat percentiles (usec): 00:21:03.044 | 1.00th=[ 5407], 5.00th=[ 7832], 10.00th=[ 9896], 20.00th=[10552], 00:21:03.044 | 30.00th=[10683], 40.00th=[11600], 50.00th=[12649], 60.00th=[14222], 00:21:03.044 | 70.00th=[15926], 80.00th=[18744], 90.00th=[23200], 95.00th=[49021], 00:21:03.044 | 99.00th=[76022], 99.50th=[79168], 99.90th=[80217], 99.95th=[80217], 00:21:03.044 | 99.99th=[80217] 00:21:03.044 bw ( KiB/s): min=16384, max=16384, per=23.18%, avg=16384.00, stdev= 0.00, samples=2 00:21:03.044 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:21:03.045 lat (msec) : 4=0.22%, 10=14.61%, 20=68.68%, 50=13.31%, 100=3.19% 00:21:03.045 cpu : usr=1.25%, sys=2.98%, ctx=575, majf=0, minf=1 00:21:03.045 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:21:03.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:03.045 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:03.045 issued rwts: total=4065,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:03.045 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:03.045 job2: (groupid=0, jobs=1): err= 0: pid=1476403: Tue Apr 23 21:21:57 2024 00:21:03.045 read: IOPS=4443, BW=17.4MiB/s (18.2MB/s)(17.4MiB/1005msec) 00:21:03.045 slat (nsec): min=885, max=13657k, avg=116644.88, stdev=859389.70 00:21:03.045 clat (usec): min=982, max=32537, avg=14021.52, stdev=3617.40 00:21:03.045 lat (usec): min=4720, max=32541, avg=14138.16, stdev=3682.78 00:21:03.045 clat percentiles (usec): 00:21:03.045 | 1.00th=[ 5276], 5.00th=[10028], 10.00th=[10814], 20.00th=[11338], 00:21:03.045 | 30.00th=[11994], 40.00th=[12780], 50.00th=[13173], 60.00th=[13960], 00:21:03.045 | 70.00th=[14746], 80.00th=[16057], 90.00th=[19006], 95.00th=[21103], 00:21:03.045 | 99.00th=[25297], 99.50th=[28181], 99.90th=[32637], 99.95th=[32637], 00:21:03.045 | 99.99th=[32637] 00:21:03.045 write: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec); 0 zone resets 00:21:03.045 slat (nsec): min=1605, max=11592k, avg=100867.83, stdev=582397.18 00:21:03.045 clat (usec): min=2468, max=73904, avg=14044.98, stdev=10506.08 00:21:03.045 lat (usec): min=2476, max=73910, avg=14145.85, stdev=10573.43 00:21:03.045 clat percentiles (usec): 00:21:03.045 | 1.00th=[ 3490], 5.00th=[ 5538], 10.00th=[ 7046], 20.00th=[ 8717], 00:21:03.045 | 30.00th=[11207], 40.00th=[11600], 50.00th=[11994], 60.00th=[12518], 00:21:03.045 | 70.00th=[13173], 80.00th=[14746], 90.00th=[17957], 95.00th=[34866], 00:21:03.045 | 99.00th=[66323], 99.50th=[69731], 99.90th=[73925], 99.95th=[73925], 00:21:03.045 | 99.99th=[73925] 00:21:03.045 bw ( KiB/s): min=16384, max=20480, per=26.08%, avg=18432.00, stdev=2896.31, samples=2 00:21:03.045 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:21:03.045 lat (usec) : 1000=0.01% 00:21:03.045 lat (msec) : 4=1.00%, 10=14.55%, 20=76.13%, 50=6.62%, 100=1.69% 00:21:03.045 cpu : usr=1.99%, sys=3.98%, ctx=525, majf=0, minf=1 
00:21:03.045 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:21:03.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:03.045 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:03.045 issued rwts: total=4466,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:03.045 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:03.045 job3: (groupid=0, jobs=1): err= 0: pid=1476404: Tue Apr 23 21:21:57 2024 00:21:03.045 read: IOPS=4103, BW=16.0MiB/s (16.8MB/s)(16.1MiB/1005msec) 00:21:03.045 slat (nsec): min=819, max=21359k, avg=124418.77, stdev=839852.74 00:21:03.045 clat (usec): min=1251, max=52468, avg=15122.78, stdev=8079.78 00:21:03.045 lat (usec): min=6239, max=52485, avg=15247.20, stdev=8124.34 00:21:03.045 clat percentiles (usec): 00:21:03.045 | 1.00th=[ 6783], 5.00th=[ 8455], 10.00th=[ 9241], 20.00th=[10552], 00:21:03.045 | 30.00th=[11469], 40.00th=[12256], 50.00th=[13042], 60.00th=[13829], 00:21:03.045 | 70.00th=[14615], 80.00th=[16712], 90.00th=[22152], 95.00th=[33817], 00:21:03.045 | 99.00th=[48497], 99.50th=[49546], 99.90th=[51643], 99.95th=[51643], 00:21:03.045 | 99.99th=[52691] 00:21:03.045 write: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec); 0 zone resets 00:21:03.045 slat (nsec): min=1508, max=11595k, avg=103759.54, stdev=545014.63 00:21:03.045 clat (usec): min=1268, max=47657, avg=14104.96, stdev=6227.98 00:21:03.045 lat (usec): min=1279, max=47660, avg=14208.72, stdev=6256.54 00:21:03.045 clat percentiles (usec): 00:21:03.045 | 1.00th=[ 6587], 5.00th=[ 8356], 10.00th=[ 9765], 20.00th=[11469], 00:21:03.045 | 30.00th=[11994], 40.00th=[12256], 50.00th=[12649], 60.00th=[13173], 00:21:03.045 | 70.00th=[13698], 80.00th=[15008], 90.00th=[19006], 95.00th=[23462], 00:21:03.045 | 99.00th=[47449], 99.50th=[47449], 99.90th=[47449], 99.95th=[47449], 00:21:03.045 | 99.99th=[47449] 00:21:03.045 bw ( KiB/s): min=17144, max=18920, per=25.51%, avg=18032.00, stdev=1255.82, samples=2 00:21:03.045 iops : min= 4286, max= 4730, avg=4508.00, stdev=313.96, samples=2 00:21:03.045 lat (msec) : 2=0.05%, 4=0.32%, 10=12.73%, 20=77.46%, 50=9.29% 00:21:03.045 lat (msec) : 100=0.15% 00:21:03.045 cpu : usr=1.69%, sys=2.29%, ctx=631, majf=0, minf=1 00:21:03.045 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:21:03.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:03.045 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:03.045 issued rwts: total=4124,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:03.045 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:03.045 00:21:03.045 Run status group 0 (all jobs): 00:21:03.045 READ: bw=65.9MiB/s (69.1MB/s), 15.2MiB/s-19.2MiB/s (16.0MB/s-20.1MB/s), io=68.7MiB (72.0MB), run=1003-1043msec 00:21:03.045 WRITE: bw=69.0MiB/s (72.4MB/s), 15.3MiB/s-19.9MiB/s (16.1MB/s-20.9MB/s), io=72.0MiB (75.5MB), run=1003-1043msec 00:21:03.045 00:21:03.045 Disk stats (read/write): 00:21:03.045 nvme0n1: ios=4146/4239, merge=0/0, ticks=46038/41876, in_queue=87914, util=88.08% 00:21:03.045 nvme0n2: ios=3080/3584, merge=0/0, ticks=25827/33571, in_queue=59398, util=86.80% 00:21:03.045 nvme0n3: ios=3627/4095, merge=0/0, ticks=48862/57481, in_queue=106343, util=96.79% 00:21:03.045 nvme0n4: ios=3603/3846, merge=0/0, ticks=30451/27884, in_queue=58335, util=96.87% 00:21:03.045 21:21:57 -- target/fio.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite 
-r 1 -v 00:21:03.045 [global] 00:21:03.045 thread=1 00:21:03.045 invalidate=1 00:21:03.045 rw=randwrite 00:21:03.045 time_based=1 00:21:03.045 runtime=1 00:21:03.045 ioengine=libaio 00:21:03.045 direct=1 00:21:03.045 bs=4096 00:21:03.045 iodepth=128 00:21:03.045 norandommap=0 00:21:03.045 numjobs=1 00:21:03.045 00:21:03.045 verify_dump=1 00:21:03.045 verify_backlog=512 00:21:03.045 verify_state_save=0 00:21:03.045 do_verify=1 00:21:03.045 verify=crc32c-intel 00:21:03.045 [job0] 00:21:03.045 filename=/dev/nvme0n1 00:21:03.045 [job1] 00:21:03.045 filename=/dev/nvme0n2 00:21:03.045 [job2] 00:21:03.045 filename=/dev/nvme0n3 00:21:03.045 [job3] 00:21:03.045 filename=/dev/nvme0n4 00:21:03.045 Could not set queue depth (nvme0n1) 00:21:03.045 Could not set queue depth (nvme0n2) 00:21:03.045 Could not set queue depth (nvme0n3) 00:21:03.045 Could not set queue depth (nvme0n4) 00:21:03.305 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:03.305 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:03.305 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:03.305 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:03.305 fio-3.35 00:21:03.305 Starting 4 threads 00:21:04.698 00:21:04.698 job0: (groupid=0, jobs=1): err= 0: pid=1476866: Tue Apr 23 21:21:58 2024 00:21:04.698 read: IOPS=4201, BW=16.4MiB/s (17.2MB/s)(16.5MiB/1005msec) 00:21:04.698 slat (nsec): min=921, max=9880.0k, avg=107464.47, stdev=758964.54 00:21:04.698 clat (usec): min=2888, max=31798, avg=13243.59, stdev=3536.78 00:21:04.698 lat (usec): min=5583, max=31803, avg=13351.05, stdev=3582.74 00:21:04.698 clat percentiles (usec): 00:21:04.698 | 1.00th=[ 7570], 5.00th=[ 9765], 10.00th=[10421], 20.00th=[11076], 00:21:04.698 | 30.00th=[11207], 40.00th=[11469], 50.00th=[12125], 60.00th=[12649], 00:21:04.698 | 70.00th=[13698], 80.00th=[15401], 90.00th=[17695], 95.00th=[20317], 00:21:04.698 | 99.00th=[26870], 99.50th=[29230], 99.90th=[31851], 99.95th=[31851], 00:21:04.698 | 99.99th=[31851] 00:21:04.698 write: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec); 0 zone resets 00:21:04.698 slat (nsec): min=1656, max=33687k, avg=112847.21, stdev=868496.71 00:21:04.698 clat (usec): min=1268, max=46899, avg=15522.88, stdev=9286.45 00:21:04.698 lat (usec): min=1278, max=46909, avg=15635.72, stdev=9338.13 00:21:04.698 clat percentiles (usec): 00:21:04.698 | 1.00th=[ 4359], 5.00th=[ 6587], 10.00th=[ 7242], 20.00th=[ 8717], 00:21:04.698 | 30.00th=[ 9372], 40.00th=[10421], 50.00th=[11469], 60.00th=[13829], 00:21:04.699 | 70.00th=[16581], 80.00th=[23987], 90.00th=[29230], 95.00th=[34341], 00:21:04.699 | 99.00th=[46924], 99.50th=[46924], 99.90th=[46924], 99.95th=[46924], 00:21:04.699 | 99.99th=[46924] 00:21:04.699 bw ( KiB/s): min=18376, max=18480, per=25.22%, avg=18428.00, stdev=73.54, samples=2 00:21:04.699 iops : min= 4594, max= 4620, avg=4607.00, stdev=18.38, samples=2 00:21:04.699 lat (msec) : 2=0.10%, 4=0.18%, 10=21.63%, 20=62.19%, 50=15.90% 00:21:04.699 cpu : usr=2.49%, sys=2.99%, ctx=347, majf=0, minf=1 00:21:04.699 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:21:04.699 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:04.699 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:04.699 issued rwts: 
total=4223,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:04.699 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:04.699 job1: (groupid=0, jobs=1): err= 0: pid=1476868: Tue Apr 23 21:21:58 2024 00:21:04.699 read: IOPS=5998, BW=23.4MiB/s (24.6MB/s)(23.5MiB/1002msec) 00:21:04.699 slat (nsec): min=936, max=10225k, avg=91621.64, stdev=654712.65 00:21:04.699 clat (usec): min=1391, max=20647, avg=11012.54, stdev=2615.23 00:21:04.699 lat (usec): min=3520, max=20651, avg=11104.16, stdev=2657.05 00:21:04.699 clat percentiles (usec): 00:21:04.699 | 1.00th=[ 3720], 5.00th=[ 7898], 10.00th=[ 9241], 20.00th=[ 9372], 00:21:04.699 | 30.00th=[ 9634], 40.00th=[10159], 50.00th=[10290], 60.00th=[10552], 00:21:04.699 | 70.00th=[11207], 80.00th=[12649], 90.00th=[15270], 95.00th=[16712], 00:21:04.699 | 99.00th=[18482], 99.50th=[18744], 99.90th=[19530], 99.95th=[19530], 00:21:04.699 | 99.99th=[20579] 00:21:04.699 write: IOPS=6131, BW=24.0MiB/s (25.1MB/s)(24.0MiB/1002msec); 0 zone resets 00:21:04.699 slat (nsec): min=1569, max=23813k, avg=70462.12, stdev=437607.72 00:21:04.699 clat (usec): min=2200, max=35154, avg=9911.79, stdev=3256.76 00:21:04.699 lat (usec): min=2204, max=35163, avg=9982.26, stdev=3277.95 00:21:04.699 clat percentiles (usec): 00:21:04.699 | 1.00th=[ 2769], 5.00th=[ 4948], 10.00th=[ 6063], 20.00th=[ 8455], 00:21:04.699 | 30.00th=[ 9634], 40.00th=[10028], 50.00th=[10290], 60.00th=[10421], 00:21:04.699 | 70.00th=[10683], 80.00th=[10683], 90.00th=[10945], 95.00th=[13042], 00:21:04.699 | 99.00th=[27132], 99.50th=[27132], 99.90th=[27132], 99.95th=[27132], 00:21:04.699 | 99.99th=[35390] 00:21:04.699 bw ( KiB/s): min=24576, max=24576, per=33.63%, avg=24576.00, stdev= 0.00, samples=2 00:21:04.699 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=2 00:21:04.699 lat (msec) : 2=0.01%, 4=2.19%, 10=34.42%, 20=62.34%, 50=1.04% 00:21:04.699 cpu : usr=2.00%, sys=3.90%, ctx=799, majf=0, minf=1 00:21:04.699 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:21:04.699 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:04.699 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:04.699 issued rwts: total=6010,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:04.699 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:04.699 job2: (groupid=0, jobs=1): err= 0: pid=1476874: Tue Apr 23 21:21:58 2024 00:21:04.699 read: IOPS=1528, BW=6113KiB/s (6260kB/s)(6144KiB/1005msec) 00:21:04.699 slat (nsec): min=1275, max=31109k, avg=269861.09, stdev=1901221.13 00:21:04.699 clat (usec): min=8299, max=96436, avg=30730.07, stdev=19906.56 00:21:04.699 lat (usec): min=8305, max=96469, avg=30999.93, stdev=20099.11 00:21:04.699 clat percentiles (usec): 00:21:04.699 | 1.00th=[ 9896], 5.00th=[14091], 10.00th=[16188], 20.00th=[17171], 00:21:04.699 | 30.00th=[17695], 40.00th=[18220], 50.00th=[19006], 60.00th=[21627], 00:21:04.699 | 70.00th=[38011], 80.00th=[53216], 90.00th=[65274], 95.00th=[68682], 00:21:04.699 | 99.00th=[79168], 99.50th=[79168], 99.90th=[92799], 99.95th=[95945], 00:21:04.699 | 99.99th=[95945] 00:21:04.699 write: IOPS=1968, BW=7873KiB/s (8062kB/s)(7912KiB/1005msec); 0 zone resets 00:21:04.699 slat (nsec): min=1690, max=10877k, avg=293199.76, stdev=1207367.14 00:21:04.699 clat (msec): min=2, max=104, avg=40.37, stdev=26.30 00:21:04.699 lat (msec): min=5, max=104, avg=40.66, stdev=26.43 00:21:04.699 clat percentiles (msec): 00:21:04.699 | 1.00th=[ 10], 5.00th=[ 11], 10.00th=[ 12], 20.00th=[ 19], 00:21:04.699 | 
30.00th=[ 22], 40.00th=[ 24], 50.00th=[ 32], 60.00th=[ 43], 00:21:04.699 | 70.00th=[ 55], 80.00th=[ 63], 90.00th=[ 82], 95.00th=[ 96], 00:21:04.699 | 99.00th=[ 105], 99.50th=[ 105], 99.90th=[ 105], 99.95th=[ 105], 00:21:04.699 | 99.99th=[ 105] 00:21:04.699 bw ( KiB/s): min= 6608, max= 8192, per=10.13%, avg=7400.00, stdev=1120.06, samples=2 00:21:04.699 iops : min= 1652, max= 2048, avg=1850.00, stdev=280.01, samples=2 00:21:04.699 lat (msec) : 4=0.03%, 10=1.79%, 20=34.95%, 50=34.32%, 100=26.64% 00:21:04.699 lat (msec) : 250=2.28% 00:21:04.699 cpu : usr=1.00%, sys=1.69%, ctx=273, majf=0, minf=1 00:21:04.699 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:21:04.699 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:04.699 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:04.699 issued rwts: total=1536,1978,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:04.699 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:04.699 job3: (groupid=0, jobs=1): err= 0: pid=1476875: Tue Apr 23 21:21:58 2024 00:21:04.699 read: IOPS=5498, BW=21.5MiB/s (22.5MB/s)(21.6MiB/1004msec) 00:21:04.699 slat (nsec): min=951, max=11263k, avg=99661.18, stdev=726249.27 00:21:04.699 clat (usec): min=1406, max=22984, avg=12227.16, stdev=2560.64 00:21:04.699 lat (usec): min=3615, max=23020, avg=12326.82, stdev=2616.35 00:21:04.699 clat percentiles (usec): 00:21:04.699 | 1.00th=[ 5473], 5.00th=[ 8848], 10.00th=[10290], 20.00th=[10814], 00:21:04.699 | 30.00th=[11076], 40.00th=[11338], 50.00th=[11600], 60.00th=[11863], 00:21:04.699 | 70.00th=[12256], 80.00th=[14091], 90.00th=[15926], 95.00th=[17957], 00:21:04.699 | 99.00th=[20317], 99.50th=[20841], 99.90th=[21365], 99.95th=[21365], 00:21:04.699 | 99.99th=[22938] 00:21:04.699 write: IOPS=5609, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1004msec); 0 zone resets 00:21:04.699 slat (nsec): min=1583, max=9267.6k, avg=76948.80, stdev=452768.14 00:21:04.699 clat (usec): min=2315, max=21355, avg=10625.67, stdev=2697.57 00:21:04.699 lat (usec): min=2324, max=21358, avg=10702.62, stdev=2709.68 00:21:04.699 clat percentiles (usec): 00:21:04.699 | 1.00th=[ 3425], 5.00th=[ 5997], 10.00th=[ 6849], 20.00th=[ 8455], 00:21:04.699 | 30.00th=[ 9503], 40.00th=[10421], 50.00th=[11076], 60.00th=[11469], 00:21:04.699 | 70.00th=[11731], 80.00th=[12256], 90.00th=[14222], 95.00th=[15139], 00:21:04.699 | 99.00th=[16909], 99.50th=[17433], 99.90th=[20841], 99.95th=[20841], 00:21:04.699 | 99.99th=[21365] 00:21:04.699 bw ( KiB/s): min=21648, max=23408, per=30.83%, avg=22528.00, stdev=1244.51, samples=2 00:21:04.699 iops : min= 5412, max= 5852, avg=5632.00, stdev=311.13, samples=2 00:21:04.699 lat (msec) : 2=0.01%, 4=0.83%, 10=21.10%, 20=77.31%, 50=0.74% 00:21:04.699 cpu : usr=1.60%, sys=4.89%, ctx=584, majf=0, minf=1 00:21:04.699 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:21:04.699 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:04.699 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:04.699 issued rwts: total=5520,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:04.699 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:04.699 00:21:04.699 Run status group 0 (all jobs): 00:21:04.699 READ: bw=67.2MiB/s (70.5MB/s), 6113KiB/s-23.4MiB/s (6260kB/s-24.6MB/s), io=67.5MiB (70.8MB), run=1002-1005msec 00:21:04.699 WRITE: bw=71.4MiB/s (74.8MB/s), 7873KiB/s-24.0MiB/s (8062kB/s-25.1MB/s), io=71.7MiB (75.2MB), run=1002-1005msec 00:21:04.699 
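Note on reproducing one of these passes outside the harness: fio-wrapper does little more than render its -i/-d/-t/-r/-v arguments into the job file it echoes before each run (bs, iodepth, rw, runtime, and the verify block, respectively; this mapping is inferred from comparing the invocations and job files above). A minimal sketch that replays job0 of the randwrite/iodepth=128 pass by hand; the job-file path is invented, the option values are copied from the log:

cat > /tmp/nvmf-job0.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=randwrite
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=128
norandommap=0
numjobs=1
verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel
[job0]
filename=/dev/nvme0n1
EOF
fio /tmp/nvmf-job0.fio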
00:21:04.699 Disk stats (read/write): 00:21:04.699 nvme0n1: ios=3605/3931, merge=0/0, ticks=47540/58184, in_queue=105724, util=86.87% 00:21:04.699 nvme0n2: ios=5163/5327, merge=0/0, ticks=55022/52066, in_queue=107088, util=91.59% 00:21:04.699 nvme0n3: ios=1581/1536, merge=0/0, ticks=17069/19309, in_queue=36378, util=94.43% 00:21:04.699 nvme0n4: ios=4625/4919, merge=0/0, ticks=55952/51721, in_queue=107673, util=94.39% 00:21:04.699 21:21:58 -- target/fio.sh@55 -- # sync 00:21:04.699 21:21:58 -- target/fio.sh@59 -- # fio_pid=1477164 00:21:04.699 21:21:58 -- target/fio.sh@61 -- # sleep 3 00:21:04.699 21:21:58 -- target/fio.sh@58 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:21:04.699 [global] 00:21:04.699 thread=1 00:21:04.699 invalidate=1 00:21:04.699 rw=read 00:21:04.699 time_based=1 00:21:04.699 runtime=10 00:21:04.699 ioengine=libaio 00:21:04.699 direct=1 00:21:04.699 bs=4096 00:21:04.699 iodepth=1 00:21:04.699 norandommap=1 00:21:04.699 numjobs=1 00:21:04.699 00:21:04.699 [job0] 00:21:04.699 filename=/dev/nvme0n1 00:21:04.699 [job1] 00:21:04.699 filename=/dev/nvme0n2 00:21:04.699 [job2] 00:21:04.699 filename=/dev/nvme0n3 00:21:04.699 [job3] 00:21:04.699 filename=/dev/nvme0n4 00:21:04.699 Could not set queue depth (nvme0n1) 00:21:04.699 Could not set queue depth (nvme0n2) 00:21:04.699 Could not set queue depth (nvme0n3) 00:21:04.699 Could not set queue depth (nvme0n4) 00:21:04.966 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:04.966 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:04.966 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:04.966 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:04.966 fio-3.35 00:21:04.966 Starting 4 threads 00:21:07.502 21:22:01 -- target/fio.sh@63 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:21:07.762 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=33144832, buflen=4096 00:21:07.762 fio: pid=1477344, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:21:07.762 21:22:01 -- target/fio.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:21:07.762 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=13737984, buflen=4096 00:21:07.762 fio: pid=1477343, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:21:07.762 21:22:01 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:07.762 21:22:01 -- target/fio.sh@66 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:21:08.021 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=17235968, buflen=4096 00:21:08.021 fio: pid=1477339, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:21:08.021 21:22:02 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:08.021 21:22:02 -- target/fio.sh@66 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:21:08.280 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=18305024, buflen=4096 00:21:08.280 fio: pid=1477342, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:21:08.280 
21:22:02 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:08.280 21:22:02 -- target/fio.sh@66 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:21:08.280 00:21:08.280 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1477339: Tue Apr 23 21:22:02 2024 00:21:08.280 read: IOPS=1460, BW=5842KiB/s (5983kB/s)(16.4MiB/2881msec) 00:21:08.280 slat (usec): min=2, max=21838, avg=14.76, stdev=367.87 00:21:08.280 clat (usec): min=224, max=41588, avg=665.44, stdev=3763.61 00:21:08.280 lat (usec): min=231, max=41594, avg=675.01, stdev=3767.22 00:21:08.280 clat percentiles (usec): 00:21:08.280 | 1.00th=[ 239], 5.00th=[ 251], 10.00th=[ 260], 20.00th=[ 269], 00:21:08.280 | 30.00th=[ 281], 40.00th=[ 289], 50.00th=[ 297], 60.00th=[ 310], 00:21:08.280 | 70.00th=[ 322], 80.00th=[ 338], 90.00th=[ 424], 95.00th=[ 486], 00:21:08.280 | 99.00th=[ 652], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:21:08.280 | 99.99th=[41681] 00:21:08.280 bw ( KiB/s): min= 4584, max= 7344, per=22.90%, avg=6028.80, stdev=993.52, samples=5 00:21:08.280 iops : min= 1146, max= 1836, avg=1507.20, stdev=248.38, samples=5 00:21:08.280 lat (usec) : 250=4.73%, 500=91.30%, 750=3.02%, 1000=0.05% 00:21:08.280 lat (msec) : 2=0.02%, 50=0.86% 00:21:08.280 cpu : usr=0.56%, sys=1.88%, ctx=4213, majf=0, minf=1 00:21:08.280 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:08.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:08.280 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:08.280 issued rwts: total=4209,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:08.280 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:08.280 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1477342: Tue Apr 23 21:22:02 2024 00:21:08.280 read: IOPS=1461, BW=5846KiB/s (5986kB/s)(17.5MiB/3058msec) 00:21:08.280 slat (usec): min=3, max=14432, avg=13.50, stdev=277.12 00:21:08.280 clat (usec): min=217, max=42523, avg=669.17, stdev=3753.70 00:21:08.280 lat (usec): min=224, max=56955, avg=682.66, stdev=3800.45 00:21:08.280 clat percentiles (usec): 00:21:08.280 | 1.00th=[ 231], 5.00th=[ 243], 10.00th=[ 249], 20.00th=[ 265], 00:21:08.280 | 30.00th=[ 277], 40.00th=[ 293], 50.00th=[ 306], 60.00th=[ 322], 00:21:08.280 | 70.00th=[ 334], 80.00th=[ 359], 90.00th=[ 445], 95.00th=[ 490], 00:21:08.280 | 99.00th=[ 709], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:21:08.280 | 99.99th=[42730] 00:21:08.280 bw ( KiB/s): min= 5312, max= 7704, per=24.71%, avg=6503.00, stdev=1048.81, samples=5 00:21:08.280 iops : min= 1328, max= 1926, avg=1625.60, stdev=262.30, samples=5 00:21:08.280 lat (usec) : 250=10.27%, 500=85.57%, 750=3.15%, 1000=0.11% 00:21:08.280 lat (msec) : 2=0.02%, 50=0.85% 00:21:08.280 cpu : usr=0.72%, sys=1.86%, ctx=4472, majf=0, minf=1 00:21:08.280 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:08.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:08.280 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:08.280 issued rwts: total=4470,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:08.280 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:08.280 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1477343: Tue Apr 23 21:22:02 2024 
00:21:08.280 read: IOPS=1228, BW=4912KiB/s (5030kB/s)(13.1MiB/2731msec) 00:21:08.280 slat (nsec): min=3529, max=57721, avg=7811.96, stdev=4080.82 00:21:08.280 clat (usec): min=225, max=42099, avg=804.96, stdev=4402.79 00:21:08.280 lat (usec): min=231, max=42131, avg=812.76, stdev=4404.97 00:21:08.280 clat percentiles (usec): 00:21:08.280 | 1.00th=[ 249], 5.00th=[ 269], 10.00th=[ 281], 20.00th=[ 297], 00:21:08.280 | 30.00th=[ 310], 40.00th=[ 318], 50.00th=[ 326], 60.00th=[ 338], 00:21:08.280 | 70.00th=[ 351], 80.00th=[ 367], 90.00th=[ 392], 95.00th=[ 424], 00:21:08.280 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:21:08.280 | 99.99th=[42206] 00:21:08.280 bw ( KiB/s): min= 96, max=11848, per=17.15%, avg=4515.20, stdev=4931.01, samples=5 00:21:08.280 iops : min= 24, max= 2962, avg=1128.80, stdev=1232.75, samples=5 00:21:08.280 lat (usec) : 250=1.19%, 500=96.66%, 750=0.77%, 1000=0.15% 00:21:08.280 lat (msec) : 2=0.06%, 50=1.13% 00:21:08.280 cpu : usr=0.37%, sys=1.17%, ctx=3358, majf=0, minf=1 00:21:08.280 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:08.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:08.280 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:08.280 issued rwts: total=3355,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:08.280 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:08.280 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1477344: Tue Apr 23 21:22:02 2024 00:21:08.280 read: IOPS=3136, BW=12.3MiB/s (12.8MB/s)(31.6MiB/2580msec) 00:21:08.280 slat (nsec): min=2488, max=41655, avg=6835.01, stdev=2078.64 00:21:08.280 clat (usec): min=211, max=4426, avg=310.87, stdev=83.64 00:21:08.280 lat (usec): min=218, max=4434, avg=317.71, stdev=83.85 00:21:08.280 clat percentiles (usec): 00:21:08.280 | 1.00th=[ 233], 5.00th=[ 251], 10.00th=[ 265], 20.00th=[ 277], 00:21:08.280 | 30.00th=[ 285], 40.00th=[ 289], 50.00th=[ 297], 60.00th=[ 306], 00:21:08.280 | 70.00th=[ 318], 80.00th=[ 330], 90.00th=[ 367], 95.00th=[ 429], 00:21:08.280 | 99.00th=[ 498], 99.50th=[ 515], 99.90th=[ 750], 99.95th=[ 1745], 00:21:08.280 | 99.99th=[ 4424] 00:21:08.280 bw ( KiB/s): min=11728, max=13456, per=47.93%, avg=12616.00, stdev=717.69, samples=5 00:21:08.280 iops : min= 2932, max= 3364, avg=3154.00, stdev=179.42, samples=5 00:21:08.280 lat (usec) : 250=4.41%, 500=94.72%, 750=0.75%, 1000=0.01% 00:21:08.280 lat (msec) : 2=0.04%, 4=0.04%, 10=0.01% 00:21:08.280 cpu : usr=0.39%, sys=3.10%, ctx=8093, majf=0, minf=2 00:21:08.280 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:08.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:08.280 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:08.280 issued rwts: total=8093,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:08.280 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:08.280 00:21:08.280 Run status group 0 (all jobs): 00:21:08.280 READ: bw=25.7MiB/s (27.0MB/s), 4912KiB/s-12.3MiB/s (5030kB/s-12.8MB/s), io=78.6MiB (82.4MB), run=2580-3058msec 00:21:08.280 00:21:08.280 Disk stats (read/write): 00:21:08.280 nvme0n1: ios=4195/0, merge=0/0, ticks=2730/0, in_queue=2730, util=94.52% 00:21:08.280 nvme0n2: ios=4500/0, merge=0/0, ticks=2926/0, in_queue=2926, util=99.63% 00:21:08.280 nvme0n3: ios=3090/0, merge=0/0, ticks=3461/0, in_queue=3461, util=99.74% 00:21:08.280 nvme0n4: ios=7395/0, merge=0/0, 
ticks=2253/0, in_queue=2253, util=96.02% 00:21:08.280 21:22:02 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:08.280 21:22:02 -- target/fio.sh@66 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:21:08.539 21:22:02 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:08.539 21:22:02 -- target/fio.sh@66 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:21:08.797 21:22:02 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:08.797 21:22:02 -- target/fio.sh@66 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:21:08.797 21:22:03 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:08.797 21:22:03 -- target/fio.sh@66 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:21:09.054 21:22:03 -- target/fio.sh@69 -- # fio_status=0 00:21:09.054 21:22:03 -- target/fio.sh@70 -- # wait 1477164 00:21:09.054 21:22:03 -- target/fio.sh@70 -- # fio_status=4 00:21:09.054 21:22:03 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:09.314 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:09.314 21:22:03 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:21:09.314 21:22:03 -- common/autotest_common.sh@1205 -- # local i=0 00:21:09.314 21:22:03 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:21:09.314 21:22:03 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:09.314 21:22:03 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:21:09.314 21:22:03 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:09.314 21:22:03 -- common/autotest_common.sh@1217 -- # return 0 00:21:09.314 21:22:03 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:21:09.314 21:22:03 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:21:09.314 nvmf hotplug test: fio failed as expected 00:21:09.314 21:22:03 -- target/fio.sh@83 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:09.575 21:22:03 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:21:09.575 21:22:03 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:21:09.575 21:22:03 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:21:09.575 21:22:03 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:21:09.575 21:22:03 -- target/fio.sh@91 -- # nvmftestfini 00:21:09.575 21:22:03 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:09.575 21:22:03 -- nvmf/common.sh@117 -- # sync 00:21:09.575 21:22:03 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:09.575 21:22:03 -- nvmf/common.sh@120 -- # set +e 00:21:09.575 21:22:03 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:09.575 21:22:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:09.575 rmmod nvme_tcp 00:21:09.575 rmmod nvme_fabrics 00:21:09.575 rmmod nvme_keyring 00:21:09.575 21:22:03 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:09.575 21:22:03 -- nvmf/common.sh@124 -- # set -e 00:21:09.575 21:22:03 -- nvmf/common.sh@125 -- # return 0 00:21:09.575 21:22:03 -- nvmf/common.sh@478 -- # '[' -n 1473757 ']' 00:21:09.575 21:22:03 -- nvmf/common.sh@479 -- # killprocess 1473757 00:21:09.575 
21:22:03 -- common/autotest_common.sh@936 -- # '[' -z 1473757 ']' 00:21:09.575 21:22:03 -- common/autotest_common.sh@940 -- # kill -0 1473757 00:21:09.575 21:22:03 -- common/autotest_common.sh@941 -- # uname 00:21:09.575 21:22:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:09.575 21:22:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1473757 00:21:09.839 21:22:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:09.839 21:22:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:09.839 21:22:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1473757' 00:21:09.839 killing process with pid 1473757 00:21:09.839 21:22:03 -- common/autotest_common.sh@955 -- # kill 1473757 00:21:09.839 21:22:03 -- common/autotest_common.sh@960 -- # wait 1473757 00:21:10.098 21:22:04 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:10.098 21:22:04 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:10.098 21:22:04 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:10.098 21:22:04 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:10.098 21:22:04 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:10.098 21:22:04 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:10.098 21:22:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:10.098 21:22:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:12.634 21:22:06 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:12.634 00:21:12.634 real 0m27.289s 00:21:12.634 user 2m31.688s 00:21:12.634 sys 0m7.802s 00:21:12.634 21:22:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:12.634 21:22:06 -- common/autotest_common.sh@10 -- # set +x 00:21:12.634 ************************************ 00:21:12.634 END TEST nvmf_fio_target 00:21:12.634 ************************************ 00:21:12.635 21:22:06 -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:21:12.635 21:22:06 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:12.635 21:22:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:12.635 21:22:06 -- common/autotest_common.sh@10 -- # set +x 00:21:12.635 ************************************ 00:21:12.635 START TEST nvmf_bdevio 00:21:12.635 ************************************ 00:21:12.635 21:22:06 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:21:12.635 * Looking for test storage... 
00:21:12.635 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:21:12.635 21:22:06 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:21:12.635 21:22:06 -- nvmf/common.sh@7 -- # uname -s 00:21:12.635 21:22:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:12.635 21:22:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:12.635 21:22:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:12.635 21:22:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:12.635 21:22:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:12.635 21:22:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:12.635 21:22:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:12.635 21:22:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:12.635 21:22:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:12.635 21:22:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:12.635 21:22:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:21:12.635 21:22:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:21:12.635 21:22:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:12.635 21:22:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:12.635 21:22:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:21:12.635 21:22:06 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:12.635 21:22:06 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:21:12.635 21:22:06 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:12.635 21:22:06 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:12.635 21:22:06 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:12.635 21:22:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:12.635 21:22:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:12.635 21:22:06 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:12.635 21:22:06 -- paths/export.sh@5 -- # export PATH 00:21:12.635 21:22:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:12.635 21:22:06 -- nvmf/common.sh@47 -- # : 0 00:21:12.635 21:22:06 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:12.635 21:22:06 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:12.635 21:22:06 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:12.635 21:22:06 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:12.635 21:22:06 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:12.635 21:22:06 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:12.635 21:22:06 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:12.635 21:22:06 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:12.635 21:22:06 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:12.635 21:22:06 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:12.635 21:22:06 -- target/bdevio.sh@14 -- # nvmftestinit 00:21:12.635 21:22:06 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:12.635 21:22:06 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:12.635 21:22:06 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:12.635 21:22:06 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:12.635 21:22:06 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:12.635 21:22:06 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:12.635 21:22:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:12.635 21:22:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:12.635 21:22:06 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:21:12.635 21:22:06 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:12.635 21:22:06 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:12.635 21:22:06 -- common/autotest_common.sh@10 -- # set +x 00:21:17.911 21:22:11 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:17.911 21:22:11 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:17.911 21:22:11 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:17.912 21:22:11 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:17.912 21:22:11 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:17.912 21:22:11 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:17.912 21:22:11 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:17.912 21:22:11 -- nvmf/common.sh@295 -- # net_devs=() 00:21:17.912 21:22:11 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:17.912 21:22:11 -- 
nvmf/common.sh@296 -- # e810=() 00:21:17.912 21:22:11 -- nvmf/common.sh@296 -- # local -ga e810 00:21:17.912 21:22:11 -- nvmf/common.sh@297 -- # x722=() 00:21:17.912 21:22:11 -- nvmf/common.sh@297 -- # local -ga x722 00:21:17.912 21:22:11 -- nvmf/common.sh@298 -- # mlx=() 00:21:17.912 21:22:11 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:17.912 21:22:11 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:17.912 21:22:11 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:17.912 21:22:11 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:17.912 21:22:11 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:17.912 21:22:11 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:17.912 21:22:11 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:17.912 21:22:11 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:17.912 21:22:11 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:17.912 21:22:11 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:17.912 21:22:11 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:17.912 21:22:11 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:17.912 21:22:11 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:17.912 21:22:11 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:17.912 21:22:11 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:21:17.912 21:22:11 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:21:17.912 21:22:11 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:21:17.912 21:22:11 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:17.912 21:22:11 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:17.912 21:22:11 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:21:17.912 Found 0000:27:00.0 (0x8086 - 0x159b) 00:21:17.912 21:22:11 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:17.912 21:22:11 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:17.912 21:22:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:17.912 21:22:11 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:17.912 21:22:11 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:17.912 21:22:11 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:17.912 21:22:11 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:21:17.912 Found 0000:27:00.1 (0x8086 - 0x159b) 00:21:17.912 21:22:11 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:17.912 21:22:11 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:17.912 21:22:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:17.912 21:22:11 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:17.912 21:22:11 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:17.912 21:22:11 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:17.912 21:22:11 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:21:17.912 21:22:11 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:17.912 21:22:11 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:17.912 21:22:11 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:17.912 21:22:11 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:17.912 21:22:11 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:21:17.912 Found net devices under 0000:27:00.0: cvl_0_0 00:21:17.912 
21:22:11 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:17.912 21:22:11 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:17.912 21:22:11 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:17.912 21:22:11 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:17.912 21:22:11 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:17.912 21:22:11 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:21:17.912 Found net devices under 0000:27:00.1: cvl_0_1 00:21:17.912 21:22:11 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:17.912 21:22:11 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:17.912 21:22:11 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:17.912 21:22:11 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:17.912 21:22:11 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:17.912 21:22:11 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:17.912 21:22:11 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:17.912 21:22:11 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:17.912 21:22:11 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:17.912 21:22:11 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:17.912 21:22:11 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:17.912 21:22:11 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:17.912 21:22:11 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:17.912 21:22:11 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:17.912 21:22:11 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:17.912 21:22:11 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:17.912 21:22:11 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:17.912 21:22:11 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:17.912 21:22:11 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:17.912 21:22:11 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:17.912 21:22:11 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:17.912 21:22:11 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:17.912 21:22:11 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:17.912 21:22:11 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:17.912 21:22:11 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:17.912 21:22:11 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:17.912 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:17.912 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.755 ms 00:21:17.912 00:21:17.912 --- 10.0.0.2 ping statistics --- 00:21:17.912 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:17.912 rtt min/avg/max/mdev = 0.755/0.755/0.755/0.000 ms 00:21:17.912 21:22:11 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:17.912 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:17.912 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.496 ms 00:21:17.912 00:21:17.912 --- 10.0.0.1 ping statistics --- 00:21:17.912 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:17.912 rtt min/avg/max/mdev = 0.496/0.496/0.496/0.000 ms 00:21:17.912 21:22:11 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:17.912 21:22:11 -- nvmf/common.sh@411 -- # return 0 00:21:17.912 21:22:11 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:17.912 21:22:11 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:17.912 21:22:11 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:17.912 21:22:11 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:17.912 21:22:11 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:17.912 21:22:11 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:17.912 21:22:11 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:17.912 21:22:11 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:21:17.912 21:22:11 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:17.912 21:22:11 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:17.912 21:22:11 -- common/autotest_common.sh@10 -- # set +x 00:21:17.912 21:22:11 -- nvmf/common.sh@470 -- # nvmfpid=1482159 00:21:17.912 21:22:11 -- nvmf/common.sh@471 -- # waitforlisten 1482159 00:21:17.912 21:22:11 -- common/autotest_common.sh@817 -- # '[' -z 1482159 ']' 00:21:17.912 21:22:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:17.912 21:22:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:17.912 21:22:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:17.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:17.912 21:22:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:17.912 21:22:11 -- common/autotest_common.sh@10 -- # set +x 00:21:17.912 21:22:11 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:21:17.912 [2024-04-23 21:22:11.695983] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 00:21:17.912 [2024-04-23 21:22:11.696114] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:17.912 EAL: No free 2048 kB hugepages reported on node 1 00:21:17.912 [2024-04-23 21:22:11.837167] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:17.912 [2024-04-23 21:22:11.932405] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:17.912 [2024-04-23 21:22:11.932448] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:17.912 [2024-04-23 21:22:11.932460] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:17.912 [2024-04-23 21:22:11.932469] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:17.912 [2024-04-23 21:22:11.932477] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
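[editor sketch] The nvmf_tcp_init block earlier in this suite's setup stitches the two E810 ports into a private TCP loopback by moving one port into a network namespace: the target ends up on 10.0.0.2 inside the namespace, the initiator stays on 10.0.0.1 in the root namespace. A condensed sketch of that sequence, using the same commands the log shows above — the cvl_0_0/cvl_0_1 netdev names are specific to this rig:

    ip netns add cvl_0_0_ns_spdk                                         # namespace for the target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side, root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side, inside ns
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # accept NVMe/TCP (port 4420) on the initiator port
    ping -c 1 10.0.0.2                                                   # root ns -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # namespace -> root ns

Every target-side command that follows is then wrapped in 'ip netns exec cvl_0_0_ns_spdk' so the nvmf target binds inside the namespace.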
00:21:17.912 [2024-04-23 21:22:11.932701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:21:17.912 [2024-04-23 21:22:11.932841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:21:17.912 [2024-04-23 21:22:11.932942] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:17.912 [2024-04-23 21:22:11.932972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:21:18.171 21:22:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:18.171 21:22:12 -- common/autotest_common.sh@850 -- # return 0 00:21:18.171 21:22:12 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:18.171 21:22:12 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:18.171 21:22:12 -- common/autotest_common.sh@10 -- # set +x 00:21:18.171 21:22:12 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:18.171 21:22:12 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:18.171 21:22:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:18.171 21:22:12 -- common/autotest_common.sh@10 -- # set +x 00:21:18.171 [2024-04-23 21:22:12.423569] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:18.171 21:22:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:18.171 21:22:12 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:18.171 21:22:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:18.171 21:22:12 -- common/autotest_common.sh@10 -- # set +x 00:21:18.430 Malloc0 00:21:18.430 21:22:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:18.430 21:22:12 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:18.430 21:22:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:18.430 21:22:12 -- common/autotest_common.sh@10 -- # set +x 00:21:18.430 21:22:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:18.430 21:22:12 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:18.431 21:22:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:18.431 21:22:12 -- common/autotest_common.sh@10 -- # set +x 00:21:18.431 21:22:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:18.431 21:22:12 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:18.431 21:22:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:18.431 21:22:12 -- common/autotest_common.sh@10 -- # set +x 00:21:18.431 [2024-04-23 21:22:12.489857] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:18.431 21:22:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:18.431 21:22:12 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:21:18.431 21:22:12 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:21:18.431 21:22:12 -- nvmf/common.sh@521 -- # config=() 00:21:18.431 21:22:12 -- nvmf/common.sh@521 -- # local subsystem config 00:21:18.431 21:22:12 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:18.431 21:22:12 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:18.431 { 00:21:18.431 "params": { 00:21:18.431 "name": "Nvme$subsystem", 00:21:18.431 "trtype": "$TEST_TRANSPORT", 00:21:18.431 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:18.431 "adrfam": "ipv4", 00:21:18.431 "trsvcid": "$NVMF_PORT", 
00:21:18.431 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:18.431 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:18.431 "hdgst": ${hdgst:-false}, 00:21:18.431 "ddgst": ${ddgst:-false} 00:21:18.431 }, 00:21:18.431 "method": "bdev_nvme_attach_controller" 00:21:18.431 } 00:21:18.431 EOF 00:21:18.431 )") 00:21:18.431 21:22:12 -- nvmf/common.sh@543 -- # cat 00:21:18.431 21:22:12 -- nvmf/common.sh@545 -- # jq . 00:21:18.431 21:22:12 -- nvmf/common.sh@546 -- # IFS=, 00:21:18.431 21:22:12 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:21:18.431 "params": { 00:21:18.431 "name": "Nvme1", 00:21:18.431 "trtype": "tcp", 00:21:18.431 "traddr": "10.0.0.2", 00:21:18.431 "adrfam": "ipv4", 00:21:18.431 "trsvcid": "4420", 00:21:18.431 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:18.431 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:18.431 "hdgst": false, 00:21:18.431 "ddgst": false 00:21:18.431 }, 00:21:18.431 "method": "bdev_nvme_attach_controller" 00:21:18.431 }' 00:21:18.431 [2024-04-23 21:22:12.562253] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 00:21:18.431 [2024-04-23 21:22:12.562357] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1482440 ] 00:21:18.431 EAL: No free 2048 kB hugepages reported on node 1 00:21:18.431 [2024-04-23 21:22:12.677381] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:18.690 [2024-04-23 21:22:12.768291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:18.690 [2024-04-23 21:22:12.768394] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:18.690 [2024-04-23 21:22:12.768399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:18.950 I/O targets: 00:21:18.950 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:21:18.950 00:21:18.950 00:21:18.950 CUnit - A unit testing framework for C - Version 2.1-3 00:21:18.950 http://cunit.sourceforge.net/ 00:21:18.950 00:21:18.950 00:21:18.950 Suite: bdevio tests on: Nvme1n1 00:21:18.950 Test: blockdev write read block ...passed 00:21:18.950 Test: blockdev write zeroes read block ...passed 00:21:18.950 Test: blockdev write zeroes read no split ...passed 00:21:18.950 Test: blockdev write zeroes read split ...passed 00:21:19.210 Test: blockdev write zeroes read split partial ...passed 00:21:19.210 Test: blockdev reset ...[2024-04-23 21:22:13.225391] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:19.210 [2024-04-23 21:22:13.225492] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:21:19.210 [2024-04-23 21:22:13.361061] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:19.210 passed 00:21:19.210 Test: blockdev write read 8 blocks ...passed 00:21:19.210 Test: blockdev write read size > 128k ...passed 00:21:19.210 Test: blockdev write read invalid size ...passed 00:21:19.210 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:19.210 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:19.210 Test: blockdev write read max offset ...passed 00:21:19.470 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:19.471 Test: blockdev writev readv 8 blocks ...passed 00:21:19.471 Test: blockdev writev readv 30 x 1block ...passed 00:21:19.471 Test: blockdev writev readv block ...passed 00:21:19.471 Test: blockdev writev readv size > 128k ...passed 00:21:19.471 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:19.471 Test: blockdev comparev and writev ...[2024-04-23 21:22:13.616149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:19.471 [2024-04-23 21:22:13.616190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.471 [2024-04-23 21:22:13.616209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:19.471 [2024-04-23 21:22:13.616219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:19.471 [2024-04-23 21:22:13.616623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:19.471 [2024-04-23 21:22:13.616639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:19.471 [2024-04-23 21:22:13.616653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:19.471 [2024-04-23 21:22:13.616661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:19.471 [2024-04-23 21:22:13.617015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:19.471 [2024-04-23 21:22:13.617025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:19.471 [2024-04-23 21:22:13.617038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:19.471 [2024-04-23 21:22:13.617046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:19.471 [2024-04-23 21:22:13.617414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:19.471 [2024-04-23 21:22:13.617423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:19.471 [2024-04-23 21:22:13.617436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:19.471 [2024-04-23 21:22:13.617444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:19.471 passed 00:21:19.471 Test: blockdev nvme passthru rw ...passed 00:21:19.471 Test: blockdev nvme passthru vendor specific ...[2024-04-23 21:22:13.699084] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:19.471 [2024-04-23 21:22:13.699108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:19.471 [2024-04-23 21:22:13.699280] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:19.471 [2024-04-23 21:22:13.699289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:19.471 [2024-04-23 21:22:13.699454] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:19.471 [2024-04-23 21:22:13.699462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:19.471 [2024-04-23 21:22:13.699624] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:19.471 [2024-04-23 21:22:13.699637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:19.471 passed 00:21:19.471 Test: blockdev nvme admin passthru ...passed 00:21:19.729 Test: blockdev copy ...passed 00:21:19.729 00:21:19.729 Run Summary: Type Total Ran Passed Failed Inactive 00:21:19.729 suites 1 1 n/a 0 0 00:21:19.729 tests 23 23 23 0 0 00:21:19.729 asserts 152 152 152 0 n/a 00:21:19.729 00:21:19.729 Elapsed time = 1.463 seconds 00:21:19.987 21:22:14 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:19.987 21:22:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:19.987 21:22:14 -- common/autotest_common.sh@10 -- # set +x 00:21:19.987 21:22:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:19.987 21:22:14 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:21:19.987 21:22:14 -- target/bdevio.sh@30 -- # nvmftestfini 00:21:19.987 21:22:14 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:19.987 21:22:14 -- nvmf/common.sh@117 -- # sync 00:21:19.987 21:22:14 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:19.987 21:22:14 -- nvmf/common.sh@120 -- # set +e 00:21:19.987 21:22:14 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:19.987 21:22:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:19.987 rmmod nvme_tcp 00:21:19.987 rmmod nvme_fabrics 00:21:19.987 rmmod nvme_keyring 00:21:19.987 21:22:14 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:19.987 21:22:14 -- nvmf/common.sh@124 -- # set -e 00:21:19.987 21:22:14 -- nvmf/common.sh@125 -- # return 0 00:21:19.987 21:22:14 -- nvmf/common.sh@478 -- # '[' -n 1482159 ']' 00:21:19.987 21:22:14 -- nvmf/common.sh@479 -- # killprocess 1482159 00:21:19.987 21:22:14 -- common/autotest_common.sh@936 -- # '[' -z 1482159 ']' 00:21:19.987 21:22:14 -- common/autotest_common.sh@940 -- # kill -0 1482159 00:21:19.987 21:22:14 -- common/autotest_common.sh@941 -- # uname 00:21:19.987 21:22:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:19.987 21:22:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1482159 00:21:19.987 21:22:14 -- 
common/autotest_common.sh@942 -- # process_name=reactor_3 00:21:19.987 21:22:14 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:21:19.987 21:22:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1482159' 00:21:19.987 killing process with pid 1482159 00:21:19.987 21:22:14 -- common/autotest_common.sh@955 -- # kill 1482159 00:21:19.987 21:22:14 -- common/autotest_common.sh@960 -- # wait 1482159 00:21:20.554 21:22:14 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:20.554 21:22:14 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:20.554 21:22:14 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:20.554 21:22:14 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:20.554 21:22:14 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:20.554 21:22:14 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:20.554 21:22:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:20.554 21:22:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:22.520 21:22:16 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:22.781 00:21:22.781 real 0m10.277s 00:21:22.781 user 0m15.062s 00:21:22.781 sys 0m4.349s 00:21:22.781 21:22:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:22.781 21:22:16 -- common/autotest_common.sh@10 -- # set +x 00:21:22.781 ************************************ 00:21:22.781 END TEST nvmf_bdevio 00:21:22.781 ************************************ 00:21:22.781 21:22:16 -- nvmf/nvmf.sh@58 -- # '[' tcp = tcp ']' 00:21:22.781 21:22:16 -- nvmf/nvmf.sh@59 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:22.781 21:22:16 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:21:22.781 21:22:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:22.781 21:22:16 -- common/autotest_common.sh@10 -- # set +x 00:21:22.781 ************************************ 00:21:22.781 START TEST nvmf_bdevio_no_huge 00:21:22.781 ************************************ 00:21:22.781 21:22:16 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:22.781 * Looking for test storage... 
00:21:22.781 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:21:22.781 21:22:16 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:21:22.781 21:22:16 -- nvmf/common.sh@7 -- # uname -s 00:21:22.781 21:22:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:22.781 21:22:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:22.781 21:22:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:22.781 21:22:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:22.781 21:22:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:22.781 21:22:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:22.781 21:22:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:22.781 21:22:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:22.781 21:22:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:22.781 21:22:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:22.781 21:22:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:21:22.781 21:22:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:21:22.781 21:22:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:22.781 21:22:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:22.781 21:22:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:21:22.781 21:22:16 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:22.781 21:22:16 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:21:22.781 21:22:16 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:22.781 21:22:16 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:22.781 21:22:16 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:22.781 21:22:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.781 21:22:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.781 21:22:16 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.781 21:22:16 -- paths/export.sh@5 -- # export PATH 00:21:22.781 21:22:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.781 21:22:16 -- nvmf/common.sh@47 -- # : 0 00:21:22.781 21:22:16 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:22.781 21:22:16 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:22.781 21:22:16 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:22.781 21:22:16 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:22.781 21:22:16 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:22.781 21:22:16 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:22.781 21:22:16 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:22.781 21:22:16 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:22.781 21:22:16 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:22.781 21:22:16 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:22.781 21:22:16 -- target/bdevio.sh@14 -- # nvmftestinit 00:21:22.781 21:22:16 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:22.781 21:22:16 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:22.781 21:22:16 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:22.781 21:22:16 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:22.781 21:22:16 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:22.781 21:22:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:22.781 21:22:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:22.781 21:22:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:22.781 21:22:16 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:21:22.781 21:22:16 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:22.781 21:22:16 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:22.781 21:22:16 -- common/autotest_common.sh@10 -- # set +x 00:21:28.062 21:22:22 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:28.062 21:22:22 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:28.062 21:22:22 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:28.062 21:22:22 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:28.062 21:22:22 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:28.062 21:22:22 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:28.062 21:22:22 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:28.062 21:22:22 -- nvmf/common.sh@295 -- # net_devs=() 00:21:28.062 21:22:22 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:28.062 21:22:22 -- 
nvmf/common.sh@296 -- # e810=() 00:21:28.062 21:22:22 -- nvmf/common.sh@296 -- # local -ga e810 00:21:28.062 21:22:22 -- nvmf/common.sh@297 -- # x722=() 00:21:28.062 21:22:22 -- nvmf/common.sh@297 -- # local -ga x722 00:21:28.062 21:22:22 -- nvmf/common.sh@298 -- # mlx=() 00:21:28.062 21:22:22 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:28.062 21:22:22 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:28.062 21:22:22 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:28.062 21:22:22 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:28.062 21:22:22 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:28.062 21:22:22 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:28.062 21:22:22 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:28.062 21:22:22 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:28.062 21:22:22 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:28.062 21:22:22 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:28.062 21:22:22 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:28.062 21:22:22 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:28.062 21:22:22 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:28.062 21:22:22 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:28.062 21:22:22 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:21:28.062 21:22:22 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:21:28.062 21:22:22 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:21:28.062 21:22:22 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:28.062 21:22:22 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:28.062 21:22:22 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:21:28.062 Found 0000:27:00.0 (0x8086 - 0x159b) 00:21:28.062 21:22:22 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:28.062 21:22:22 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:28.062 21:22:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:28.062 21:22:22 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:28.062 21:22:22 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:28.062 21:22:22 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:28.062 21:22:22 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:21:28.062 Found 0000:27:00.1 (0x8086 - 0x159b) 00:21:28.062 21:22:22 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:28.062 21:22:22 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:28.062 21:22:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:28.062 21:22:22 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:28.062 21:22:22 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:28.062 21:22:22 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:28.062 21:22:22 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:21:28.062 21:22:22 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:28.062 21:22:22 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:28.062 21:22:22 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:28.062 21:22:22 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:28.062 21:22:22 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:21:28.062 Found net devices under 0000:27:00.0: cvl_0_0 00:21:28.062 
21:22:22 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:28.062 21:22:22 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:28.062 21:22:22 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:28.062 21:22:22 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:28.062 21:22:22 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:28.062 21:22:22 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:21:28.062 Found net devices under 0000:27:00.1: cvl_0_1 00:21:28.062 21:22:22 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:28.062 21:22:22 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:28.062 21:22:22 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:28.062 21:22:22 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:28.062 21:22:22 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:28.062 21:22:22 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:28.062 21:22:22 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:28.062 21:22:22 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:28.062 21:22:22 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:28.062 21:22:22 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:28.062 21:22:22 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:28.062 21:22:22 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:28.062 21:22:22 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:28.062 21:22:22 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:28.062 21:22:22 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:28.062 21:22:22 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:28.062 21:22:22 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:28.062 21:22:22 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:28.062 21:22:22 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:28.062 21:22:22 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:28.062 21:22:22 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:28.062 21:22:22 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:28.062 21:22:22 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:28.062 21:22:22 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:28.062 21:22:22 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:28.320 21:22:22 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:28.320 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:28.320 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.531 ms 00:21:28.320 00:21:28.320 --- 10.0.0.2 ping statistics --- 00:21:28.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:28.320 rtt min/avg/max/mdev = 0.531/0.531/0.531/0.000 ms 00:21:28.320 21:22:22 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:28.320 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:28.320 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.302 ms 00:21:28.320 00:21:28.320 --- 10.0.0.1 ping statistics --- 00:21:28.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:28.320 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:21:28.320 21:22:22 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:28.320 21:22:22 -- nvmf/common.sh@411 -- # return 0 00:21:28.320 21:22:22 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:28.320 21:22:22 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:28.320 21:22:22 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:28.320 21:22:22 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:28.320 21:22:22 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:28.320 21:22:22 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:28.320 21:22:22 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:28.320 21:22:22 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:21:28.320 21:22:22 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:28.320 21:22:22 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:28.320 21:22:22 -- common/autotest_common.sh@10 -- # set +x 00:21:28.320 21:22:22 -- nvmf/common.sh@470 -- # nvmfpid=1486646 00:21:28.320 21:22:22 -- nvmf/common.sh@471 -- # waitforlisten 1486646 00:21:28.320 21:22:22 -- common/autotest_common.sh@817 -- # '[' -z 1486646 ']' 00:21:28.320 21:22:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:28.320 21:22:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:28.320 21:22:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:28.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:28.320 21:22:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:28.320 21:22:22 -- common/autotest_common.sh@10 -- # set +x 00:21:28.320 21:22:22 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:21:28.320 [2024-04-23 21:22:22.468374] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 00:21:28.320 [2024-04-23 21:22:22.468488] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:21:28.580 [2024-04-23 21:22:22.609550] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:28.580 [2024-04-23 21:22:22.730862] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:28.580 [2024-04-23 21:22:22.730899] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:28.580 [2024-04-23 21:22:22.730909] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:28.580 [2024-04-23 21:22:22.730918] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:28.580 [2024-04-23 21:22:22.730926] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
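[editor sketch] Unlike the preceding nvmf_bdevio suite, this run starts the target (and later the bdevio app) without hugepages. The nvmfappstart line above, broken out with the flag meanings — these are the standard SPDK application options as corroborated by the log's own EAL/trace notices, and $SPDK stands in for the full jenkins workspace path:

    #   -i 0        shared-memory ID; matches the suggested 'spdk_trace -s nvmf -i 0'
    #   -e 0xFFFF   enable every tracepoint group ('Tracepoint Group Mask 0xFFFF specified')
    #   --no-huge   back DPDK memory with ordinary pages instead of hugepages
    #   -s 1024     cap the memory pool at 1024 MB (echoed as '-m 1024 --no-huge' in the EAL parameters)
    #   -m 0x78     core mask 0b1111000, i.e. reactors on cores 3-6
    ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78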
00:21:28.580 [2024-04-23 21:22:22.731192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:28.580 [2024-04-23 21:22:22.731130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:21:28.580 [2024-04-23 21:22:22.731172] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:21:28.580 [2024-04-23 21:22:22.731221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:21:29.151 21:22:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:29.151 21:22:23 -- common/autotest_common.sh@850 -- # return 0 00:21:29.151 21:22:23 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:29.151 21:22:23 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:29.151 21:22:23 -- common/autotest_common.sh@10 -- # set +x 00:21:29.151 21:22:23 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:29.151 21:22:23 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:29.151 21:22:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:29.151 21:22:23 -- common/autotest_common.sh@10 -- # set +x 00:21:29.151 [2024-04-23 21:22:23.226296] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:29.151 21:22:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:29.151 21:22:23 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:29.151 21:22:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:29.151 21:22:23 -- common/autotest_common.sh@10 -- # set +x 00:21:29.151 Malloc0 00:21:29.151 21:22:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:29.151 21:22:23 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:29.151 21:22:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:29.151 21:22:23 -- common/autotest_common.sh@10 -- # set +x 00:21:29.151 21:22:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:29.151 21:22:23 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:29.151 21:22:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:29.151 21:22:23 -- common/autotest_common.sh@10 -- # set +x 00:21:29.151 21:22:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:29.151 21:22:23 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:29.151 21:22:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:29.151 21:22:23 -- common/autotest_common.sh@10 -- # set +x 00:21:29.151 [2024-04-23 21:22:23.290467] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:29.151 21:22:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:29.151 21:22:23 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:21:29.151 21:22:23 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:21:29.151 21:22:23 -- nvmf/common.sh@521 -- # config=() 00:21:29.151 21:22:23 -- nvmf/common.sh@521 -- # local subsystem config 00:21:29.151 21:22:23 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:29.151 21:22:23 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:29.151 { 00:21:29.151 "params": { 00:21:29.151 "name": "Nvme$subsystem", 00:21:29.151 "trtype": "$TEST_TRANSPORT", 00:21:29.151 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.151 "adrfam": "ipv4", 00:21:29.151 "trsvcid": 
"$NVMF_PORT", 00:21:29.151 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.151 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.151 "hdgst": ${hdgst:-false}, 00:21:29.151 "ddgst": ${ddgst:-false} 00:21:29.151 }, 00:21:29.151 "method": "bdev_nvme_attach_controller" 00:21:29.151 } 00:21:29.151 EOF 00:21:29.151 )") 00:21:29.151 21:22:23 -- nvmf/common.sh@543 -- # cat 00:21:29.151 21:22:23 -- nvmf/common.sh@545 -- # jq . 00:21:29.151 21:22:23 -- nvmf/common.sh@546 -- # IFS=, 00:21:29.151 21:22:23 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:21:29.151 "params": { 00:21:29.151 "name": "Nvme1", 00:21:29.151 "trtype": "tcp", 00:21:29.151 "traddr": "10.0.0.2", 00:21:29.151 "adrfam": "ipv4", 00:21:29.151 "trsvcid": "4420", 00:21:29.151 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:29.151 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:29.151 "hdgst": false, 00:21:29.151 "ddgst": false 00:21:29.151 }, 00:21:29.151 "method": "bdev_nvme_attach_controller" 00:21:29.151 }' 00:21:29.151 [2024-04-23 21:22:23.376806] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 00:21:29.151 [2024-04-23 21:22:23.376947] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1486946 ] 00:21:29.413 [2024-04-23 21:22:23.529154] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:29.413 [2024-04-23 21:22:23.651766] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:29.413 [2024-04-23 21:22:23.651869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:29.413 [2024-04-23 21:22:23.651876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:29.671 I/O targets: 00:21:29.671 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:21:29.671 00:21:29.671 00:21:29.671 CUnit - A unit testing framework for C - Version 2.1-3 00:21:29.671 http://cunit.sourceforge.net/ 00:21:29.671 00:21:29.671 00:21:29.671 Suite: bdevio tests on: Nvme1n1 00:21:29.671 Test: blockdev write read block ...passed 00:21:29.929 Test: blockdev write zeroes read block ...passed 00:21:29.929 Test: blockdev write zeroes read no split ...passed 00:21:29.929 Test: blockdev write zeroes read split ...passed 00:21:29.929 Test: blockdev write zeroes read split partial ...passed 00:21:29.929 Test: blockdev reset ...[2024-04-23 21:22:24.010652] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:29.929 [2024-04-23 21:22:24.010743] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:21:29.929 [2024-04-23 21:22:24.027756] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:29.929 passed 00:21:29.929 Test: blockdev write read 8 blocks ...passed 00:21:29.929 Test: blockdev write read size > 128k ...passed 00:21:29.929 Test: blockdev write read invalid size ...passed 00:21:29.929 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:29.929 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:29.929 Test: blockdev write read max offset ...passed 00:21:29.929 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:30.187 Test: blockdev writev readv 8 blocks ...passed 00:21:30.187 Test: blockdev writev readv 30 x 1block ...passed 00:21:30.187 Test: blockdev writev readv block ...passed 00:21:30.187 Test: blockdev writev readv size > 128k ...passed 00:21:30.187 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:30.187 Test: blockdev comparev and writev ...[2024-04-23 21:22:24.338683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:30.187 [2024-04-23 21:22:24.338721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:30.187 [2024-04-23 21:22:24.338743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:30.187 [2024-04-23 21:22:24.338756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:30.187 [2024-04-23 21:22:24.339263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:30.187 [2024-04-23 21:22:24.339273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:30.187 [2024-04-23 21:22:24.339286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:30.187 [2024-04-23 21:22:24.339297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:30.187 [2024-04-23 21:22:24.339781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:30.187 [2024-04-23 21:22:24.339791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:30.187 [2024-04-23 21:22:24.339804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:30.187 [2024-04-23 21:22:24.339812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:30.187 [2024-04-23 21:22:24.340298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:30.187 [2024-04-23 21:22:24.340308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:30.187 [2024-04-23 21:22:24.340320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:30.187 [2024-04-23 21:22:24.340329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:30.187 passed 00:21:30.187 Test: blockdev nvme passthru rw ...passed 00:21:30.187 Test: blockdev nvme passthru vendor specific ...[2024-04-23 21:22:24.425308] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:30.187 [2024-04-23 21:22:24.425331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:30.187 [2024-04-23 21:22:24.425643] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:30.187 [2024-04-23 21:22:24.425652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:30.187 [2024-04-23 21:22:24.425948] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:30.187 [2024-04-23 21:22:24.425956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:30.187 [2024-04-23 21:22:24.426238] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:30.187 [2024-04-23 21:22:24.426248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:30.187 passed 00:21:30.187 Test: blockdev nvme admin passthru ...passed 00:21:30.447 Test: blockdev copy ...passed 00:21:30.447 00:21:30.447 Run Summary: Type Total Ran Passed Failed Inactive 00:21:30.447 suites 1 1 n/a 0 0 00:21:30.447 tests 23 23 23 0 0 00:21:30.447 asserts 152 152 152 0 n/a 00:21:30.447 00:21:30.447 Elapsed time = 1.216 seconds 00:21:30.707 21:22:24 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:30.707 21:22:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:30.707 21:22:24 -- common/autotest_common.sh@10 -- # set +x 00:21:30.707 21:22:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:30.707 21:22:24 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:21:30.707 21:22:24 -- target/bdevio.sh@30 -- # nvmftestfini 00:21:30.707 21:22:24 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:30.707 21:22:24 -- nvmf/common.sh@117 -- # sync 00:21:30.707 21:22:24 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:30.707 21:22:24 -- nvmf/common.sh@120 -- # set +e 00:21:30.707 21:22:24 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:30.707 21:22:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:30.707 rmmod nvme_tcp 00:21:30.707 rmmod nvme_fabrics 00:21:30.707 rmmod nvme_keyring 00:21:30.707 21:22:24 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:30.707 21:22:24 -- nvmf/common.sh@124 -- # set -e 00:21:30.707 21:22:24 -- nvmf/common.sh@125 -- # return 0 00:21:30.707 21:22:24 -- nvmf/common.sh@478 -- # '[' -n 1486646 ']' 00:21:30.707 21:22:24 -- nvmf/common.sh@479 -- # killprocess 1486646 00:21:30.707 21:22:24 -- common/autotest_common.sh@936 -- # '[' -z 1486646 ']' 00:21:30.708 21:22:24 -- common/autotest_common.sh@940 -- # kill -0 1486646 00:21:30.708 21:22:24 -- common/autotest_common.sh@941 -- # uname 00:21:30.708 21:22:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:30.708 21:22:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1486646 00:21:30.708 21:22:24 -- 
common/autotest_common.sh@942 -- # process_name=reactor_3 00:21:30.708 21:22:24 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:21:30.708 21:22:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1486646' 00:21:30.708 killing process with pid 1486646 00:21:30.708 21:22:24 -- common/autotest_common.sh@955 -- # kill 1486646 00:21:30.708 21:22:24 -- common/autotest_common.sh@960 -- # wait 1486646 00:21:31.279 21:22:25 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:31.279 21:22:25 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:31.279 21:22:25 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:31.279 21:22:25 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:31.279 21:22:25 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:31.279 21:22:25 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:31.279 21:22:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:31.279 21:22:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:33.185 21:22:27 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:33.185 00:21:33.185 real 0m10.520s 00:21:33.185 user 0m14.133s 00:21:33.185 sys 0m4.955s 00:21:33.185 21:22:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:33.185 21:22:27 -- common/autotest_common.sh@10 -- # set +x 00:21:33.185 ************************************ 00:21:33.185 END TEST nvmf_bdevio_no_huge 00:21:33.185 ************************************ 00:21:33.185 21:22:27 -- nvmf/nvmf.sh@60 -- # run_test nvmf_tls /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:33.185 21:22:27 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:33.185 21:22:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:33.185 21:22:27 -- common/autotest_common.sh@10 -- # set +x 00:21:33.445 ************************************ 00:21:33.445 START TEST nvmf_tls 00:21:33.445 ************************************ 00:21:33.445 21:22:27 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:33.445 * Looking for test storage... 
00:21:33.445 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:21:33.445 21:22:27 -- target/tls.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:21:33.445 21:22:27 -- nvmf/common.sh@7 -- # uname -s 00:21:33.445 21:22:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:33.445 21:22:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:33.445 21:22:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:33.445 21:22:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:33.445 21:22:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:33.445 21:22:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:33.445 21:22:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:33.445 21:22:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:33.445 21:22:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:33.445 21:22:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:33.445 21:22:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:21:33.445 21:22:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:21:33.445 21:22:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:33.445 21:22:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:33.445 21:22:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:21:33.445 21:22:27 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:33.445 21:22:27 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:21:33.445 21:22:27 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:33.445 21:22:27 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:33.445 21:22:27 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:33.445 21:22:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.445 21:22:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.445 21:22:27 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.445 21:22:27 -- paths/export.sh@5 -- # export PATH 00:21:33.445 21:22:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.445 21:22:27 -- nvmf/common.sh@47 -- # : 0 00:21:33.445 21:22:27 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:33.445 21:22:27 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:33.445 21:22:27 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:33.445 21:22:27 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:33.445 21:22:27 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:33.445 21:22:27 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:33.445 21:22:27 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:33.445 21:22:27 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:33.445 21:22:27 -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:21:33.445 21:22:27 -- target/tls.sh@62 -- # nvmftestinit 00:21:33.445 21:22:27 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:33.445 21:22:27 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:33.445 21:22:27 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:33.445 21:22:27 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:33.445 21:22:27 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:33.445 21:22:27 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:33.445 21:22:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:33.445 21:22:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:33.445 21:22:27 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:21:33.445 21:22:27 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:33.445 21:22:27 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:33.445 21:22:27 -- common/autotest_common.sh@10 -- # set +x 00:21:38.723 21:22:32 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:38.723 21:22:32 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:38.723 21:22:32 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:38.723 21:22:32 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:38.723 21:22:32 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:38.723 21:22:32 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:38.723 21:22:32 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:38.723 21:22:32 -- nvmf/common.sh@295 -- # net_devs=() 00:21:38.723 21:22:32 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:38.723 21:22:32 -- nvmf/common.sh@296 -- # e810=() 
00:21:38.723 21:22:32 -- nvmf/common.sh@296 -- # local -ga e810 00:21:38.723 21:22:32 -- nvmf/common.sh@297 -- # x722=() 00:21:38.723 21:22:32 -- nvmf/common.sh@297 -- # local -ga x722 00:21:38.723 21:22:32 -- nvmf/common.sh@298 -- # mlx=() 00:21:38.723 21:22:32 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:38.723 21:22:32 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:38.723 21:22:32 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:38.723 21:22:32 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:38.723 21:22:32 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:38.723 21:22:32 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:38.723 21:22:32 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:38.723 21:22:32 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:38.723 21:22:32 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:38.723 21:22:32 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:38.723 21:22:32 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:38.723 21:22:32 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:38.723 21:22:32 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:38.723 21:22:32 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:38.723 21:22:32 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:21:38.723 21:22:32 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:21:38.723 21:22:32 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:21:38.723 21:22:32 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:38.723 21:22:32 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:38.723 21:22:32 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:21:38.723 Found 0000:27:00.0 (0x8086 - 0x159b) 00:21:38.723 21:22:32 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:38.723 21:22:32 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:38.723 21:22:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:38.723 21:22:32 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:38.723 21:22:32 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:38.723 21:22:32 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:38.723 21:22:32 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:21:38.723 Found 0000:27:00.1 (0x8086 - 0x159b) 00:21:38.723 21:22:32 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:38.723 21:22:32 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:38.723 21:22:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:38.723 21:22:32 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:38.723 21:22:32 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:38.723 21:22:32 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:38.723 21:22:32 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:21:38.723 21:22:32 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:38.723 21:22:32 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:38.723 21:22:32 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:38.723 21:22:32 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:38.723 21:22:32 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:21:38.723 Found net devices under 0000:27:00.0: cvl_0_0 00:21:38.723 21:22:32 -- nvmf/common.sh@390 -- # 
net_devs+=("${pci_net_devs[@]}") 00:21:38.723 21:22:32 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:38.723 21:22:32 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:38.723 21:22:32 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:38.723 21:22:32 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:38.723 21:22:32 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:21:38.723 Found net devices under 0000:27:00.1: cvl_0_1 00:21:38.723 21:22:32 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:38.723 21:22:32 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:38.723 21:22:32 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:38.723 21:22:32 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:38.723 21:22:32 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:38.723 21:22:32 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:38.723 21:22:32 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:38.723 21:22:32 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:38.723 21:22:32 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:38.723 21:22:32 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:38.723 21:22:32 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:38.723 21:22:32 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:38.723 21:22:32 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:38.723 21:22:32 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:38.723 21:22:32 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:38.723 21:22:32 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:38.723 21:22:32 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:38.723 21:22:32 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:38.723 21:22:32 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:38.723 21:22:32 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:38.723 21:22:32 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:38.723 21:22:32 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:38.723 21:22:32 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:38.723 21:22:32 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:38.724 21:22:32 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:38.724 21:22:32 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:38.724 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:38.724 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms 00:21:38.724 00:21:38.724 --- 10.0.0.2 ping statistics --- 00:21:38.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:38.724 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:21:38.724 21:22:32 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:38.724 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:38.724 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:21:38.724 00:21:38.724 --- 10.0.0.1 ping statistics --- 00:21:38.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:38.724 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:21:38.724 21:22:32 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:38.724 21:22:32 -- nvmf/common.sh@411 -- # return 0 00:21:38.724 21:22:32 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:38.724 21:22:32 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:38.724 21:22:32 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:38.724 21:22:32 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:38.724 21:22:32 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:38.724 21:22:32 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:38.724 21:22:32 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:38.724 21:22:32 -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:21:38.724 21:22:32 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:38.724 21:22:32 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:38.724 21:22:32 -- common/autotest_common.sh@10 -- # set +x 00:21:38.724 21:22:32 -- nvmf/common.sh@470 -- # nvmfpid=1491151 00:21:38.724 21:22:32 -- nvmf/common.sh@471 -- # waitforlisten 1491151 00:21:38.724 21:22:32 -- common/autotest_common.sh@817 -- # '[' -z 1491151 ']' 00:21:38.724 21:22:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:38.724 21:22:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:38.724 21:22:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:38.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:38.724 21:22:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:38.724 21:22:32 -- common/autotest_common.sh@10 -- # set +x 00:21:38.724 21:22:32 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:21:38.982 [2024-04-23 21:22:33.012820] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 00:21:38.982 [2024-04-23 21:22:33.012929] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:38.982 EAL: No free 2048 kB hugepages reported on node 1 00:21:38.982 [2024-04-23 21:22:33.137282] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:38.982 [2024-04-23 21:22:33.235798] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:38.982 [2024-04-23 21:22:33.235835] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:38.982 [2024-04-23 21:22:33.235845] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:38.982 [2024-04-23 21:22:33.235855] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:38.982 [2024-04-23 21:22:33.235862] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
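Note: the two-port loopback topology assembled above moves the target side of the link into its own network namespace, so traffic between 10.0.0.1 and 10.0.0.2 actually crosses the cvl_0_0/cvl_0_1 pair instead of short-circuiting through lo. Condensed from the commands traced above (and verified by the two ping round-trips):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up

nvmf_tgt itself is launched under ip netns exec cvl_0_0_ns_spdk, which is why its 10.0.0.2:4420 listeners are reachable only through cvl_0_1.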
00:21:38.982 [2024-04-23 21:22:33.235895] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:39.547 21:22:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:39.547 21:22:33 -- common/autotest_common.sh@850 -- # return 0 00:21:39.547 21:22:33 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:39.547 21:22:33 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:39.547 21:22:33 -- common/autotest_common.sh@10 -- # set +x 00:21:39.547 21:22:33 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:39.547 21:22:33 -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:21:39.547 21:22:33 -- target/tls.sh@70 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:21:39.806 true 00:21:39.806 21:22:33 -- target/tls.sh@73 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:39.806 21:22:33 -- target/tls.sh@73 -- # jq -r .tls_version 00:21:39.806 21:22:34 -- target/tls.sh@73 -- # version=0 00:21:39.806 21:22:34 -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:21:39.806 21:22:34 -- target/tls.sh@80 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:40.065 21:22:34 -- target/tls.sh@81 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:40.065 21:22:34 -- target/tls.sh@81 -- # jq -r .tls_version 00:21:40.065 21:22:34 -- target/tls.sh@81 -- # version=13 00:21:40.065 21:22:34 -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:21:40.065 21:22:34 -- target/tls.sh@88 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:21:40.323 21:22:34 -- target/tls.sh@89 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:40.323 21:22:34 -- target/tls.sh@89 -- # jq -r .tls_version 00:21:40.323 21:22:34 -- target/tls.sh@89 -- # version=7 00:21:40.323 21:22:34 -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:21:40.323 21:22:34 -- target/tls.sh@96 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:40.323 21:22:34 -- target/tls.sh@96 -- # jq -r .enable_ktls 00:21:40.581 21:22:34 -- target/tls.sh@96 -- # ktls=false 00:21:40.581 21:22:34 -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:21:40.581 21:22:34 -- target/tls.sh@103 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:21:40.581 21:22:34 -- target/tls.sh@104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:40.581 21:22:34 -- target/tls.sh@104 -- # jq -r .enable_ktls 00:21:40.839 21:22:34 -- target/tls.sh@104 -- # ktls=true 00:21:40.839 21:22:34 -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:21:40.839 21:22:34 -- target/tls.sh@111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:21:40.839 21:22:35 -- target/tls.sh@112 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:40.839 21:22:35 -- target/tls.sh@112 -- # jq -r .enable_ktls 00:21:41.097 21:22:35 -- target/tls.sh@112 -- # ktls=false 00:21:41.097 21:22:35 -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:21:41.097 21:22:35 -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:21:41.097 21:22:35 -- nvmf/common.sh@704 -- # 
format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:21:41.097 21:22:35 -- nvmf/common.sh@691 -- # local prefix key digest 00:21:41.097 21:22:35 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:21:41.097 21:22:35 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:21:41.097 21:22:35 -- nvmf/common.sh@693 -- # digest=1 00:21:41.097 21:22:35 -- nvmf/common.sh@694 -- # python - 00:21:41.097 21:22:35 -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:41.097 21:22:35 -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:21:41.097 21:22:35 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:21:41.097 21:22:35 -- nvmf/common.sh@691 -- # local prefix key digest 00:21:41.097 21:22:35 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:21:41.097 21:22:35 -- nvmf/common.sh@693 -- # key=ffeeddccbbaa99887766554433221100 00:21:41.097 21:22:35 -- nvmf/common.sh@693 -- # digest=1 00:21:41.097 21:22:35 -- nvmf/common.sh@694 -- # python - 00:21:41.097 21:22:35 -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:41.097 21:22:35 -- target/tls.sh@121 -- # mktemp 00:21:41.097 21:22:35 -- target/tls.sh@121 -- # key_path=/tmp/tmp.bZcTclfEgH 00:21:41.097 21:22:35 -- target/tls.sh@122 -- # mktemp 00:21:41.097 21:22:35 -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.93avnLhUSa 00:21:41.097 21:22:35 -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:41.097 21:22:35 -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:41.097 21:22:35 -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.bZcTclfEgH 00:21:41.097 21:22:35 -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.93avnLhUSa 00:21:41.097 21:22:35 -- target/tls.sh@130 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:41.355 21:22:35 -- target/tls.sh@131 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:21:41.613 21:22:35 -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.bZcTclfEgH 00:21:41.613 21:22:35 -- target/tls.sh@49 -- # local key=/tmp/tmp.bZcTclfEgH 00:21:41.613 21:22:35 -- target/tls.sh@51 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:41.613 [2024-04-23 21:22:35.835723] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:41.613 21:22:35 -- target/tls.sh@52 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:41.872 21:22:35 -- target/tls.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:41.872 [2024-04-23 21:22:36.103762] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:41.872 [2024-04-23 21:22:36.104019] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:41.872 21:22:36 -- target/tls.sh@55 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:42.132 malloc0 00:21:42.132 21:22:36 -- target/tls.sh@56 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:42.393 21:22:36 -- 
target/tls.sh@58 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.bZcTclfEgH 00:21:42.393 [2024-04-23 21:22:36.534675] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:42.393 21:22:36 -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.bZcTclfEgH 00:21:42.393 EAL: No free 2048 kB hugepages reported on node 1 00:21:54.610 Initializing NVMe Controllers 00:21:54.610 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:54.610 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:54.610 Initialization complete. Launching workers. 00:21:54.610 ======================================================== 00:21:54.610 Latency(us) 00:21:54.610 Device Information : IOPS MiB/s Average min max 00:21:54.610 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16907.34 66.04 3785.68 1102.15 5659.33 00:21:54.610 ======================================================== 00:21:54.610 Total : 16907.34 66.04 3785.68 1102.15 5659.33 00:21:54.610 00:21:54.610 21:22:46 -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.bZcTclfEgH 00:21:54.610 21:22:46 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:54.610 21:22:46 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:54.610 21:22:46 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:54.610 21:22:46 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.bZcTclfEgH' 00:21:54.610 21:22:46 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:54.610 21:22:46 -- target/tls.sh@28 -- # bdevperf_pid=1493841 00:21:54.610 21:22:46 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:54.610 21:22:46 -- target/tls.sh@31 -- # waitforlisten 1493841 /var/tmp/bdevperf.sock 00:21:54.610 21:22:46 -- common/autotest_common.sh@817 -- # '[' -z 1493841 ']' 00:21:54.610 21:22:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:54.610 21:22:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:54.610 21:22:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:54.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:54.610 21:22:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:54.611 21:22:46 -- common/autotest_common.sh@10 -- # set +x 00:21:54.611 21:22:46 -- target/tls.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:54.611 [2024-04-23 21:22:46.769691] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 
00:21:54.611 [2024-04-23 21:22:46.769809] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1493841 ] 00:21:54.611 EAL: No free 2048 kB hugepages reported on node 1 00:21:54.611 [2024-04-23 21:22:46.880509] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:54.611 [2024-04-23 21:22:46.974802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:54.611 21:22:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:54.611 21:22:47 -- common/autotest_common.sh@850 -- # return 0 00:21:54.611 21:22:47 -- target/tls.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.bZcTclfEgH 00:21:54.611 [2024-04-23 21:22:47.569054] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:54.611 [2024-04-23 21:22:47.569165] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:54.611 TLSTESTn1 00:21:54.611 21:22:47 -- target/tls.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:54.611 Running I/O for 10 seconds... 00:22:04.603 00:22:04.603 Latency(us) 00:22:04.604 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:04.604 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:04.604 Verification LBA range: start 0x0 length 0x2000 00:22:04.604 TLSTESTn1 : 10.04 3397.66 13.27 0.00 0.00 37583.28 6553.60 100994.43 00:22:04.604 =================================================================================================================== 00:22:04.604 Total : 3397.66 13.27 0.00 0.00 37583.28 6553.60 100994.43 00:22:04.604 0 00:22:04.604 21:22:57 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:04.604 21:22:57 -- target/tls.sh@45 -- # killprocess 1493841 00:22:04.604 21:22:57 -- common/autotest_common.sh@936 -- # '[' -z 1493841 ']' 00:22:04.604 21:22:57 -- common/autotest_common.sh@940 -- # kill -0 1493841 00:22:04.604 21:22:57 -- common/autotest_common.sh@941 -- # uname 00:22:04.604 21:22:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:04.604 21:22:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1493841 00:22:04.604 21:22:57 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:22:04.604 21:22:57 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:22:04.604 21:22:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1493841' 00:22:04.604 killing process with pid 1493841 00:22:04.604 21:22:57 -- common/autotest_common.sh@955 -- # kill 1493841 00:22:04.604 Received shutdown signal, test time was about 10.000000 seconds 00:22:04.604 00:22:04.604 Latency(us) 00:22:04.604 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:04.604 =================================================================================================================== 00:22:04.604 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:04.604 [2024-04-23 21:22:57.841516] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: 
deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:04.604 21:22:57 -- common/autotest_common.sh@960 -- # wait 1493841 00:22:04.604 21:22:58 -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.93avnLhUSa 00:22:04.604 21:22:58 -- common/autotest_common.sh@638 -- # local es=0 00:22:04.604 21:22:58 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.93avnLhUSa 00:22:04.604 21:22:58 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:22:04.604 21:22:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:04.604 21:22:58 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:22:04.604 21:22:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:04.604 21:22:58 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.93avnLhUSa 00:22:04.604 21:22:58 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:04.604 21:22:58 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:04.604 21:22:58 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:04.604 21:22:58 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.93avnLhUSa' 00:22:04.604 21:22:58 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:04.604 21:22:58 -- target/tls.sh@28 -- # bdevperf_pid=1496158 00:22:04.604 21:22:58 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:04.604 21:22:58 -- target/tls.sh@31 -- # waitforlisten 1496158 /var/tmp/bdevperf.sock 00:22:04.604 21:22:58 -- common/autotest_common.sh@817 -- # '[' -z 1496158 ']' 00:22:04.604 21:22:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:04.604 21:22:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:04.604 21:22:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:04.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:04.604 21:22:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:04.604 21:22:58 -- common/autotest_common.sh@10 -- # set +x 00:22:04.604 21:22:58 -- target/tls.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:04.604 [2024-04-23 21:22:58.282953] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 
00:22:04.604 [2024-04-23 21:22:58.283070] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1496158 ] 00:22:04.604 EAL: No free 2048 kB hugepages reported on node 1 00:22:04.604 [2024-04-23 21:22:58.393293] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:04.604 [2024-04-23 21:22:58.487015] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:04.866 21:22:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:04.866 21:22:58 -- common/autotest_common.sh@850 -- # return 0 00:22:04.866 21:22:58 -- target/tls.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.93avnLhUSa 00:22:04.866 [2024-04-23 21:22:59.111966] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:04.866 [2024-04-23 21:22:59.112082] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:04.866 [2024-04-23 21:22:59.123785] /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:04.866 [2024-04-23 21:22:59.124016] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004c40 (107): Transport endpoint is not connected 00:22:04.866 [2024-04-23 21:22:59.124992] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004c40 (9): Bad file descriptor 00:22:04.866 [2024-04-23 21:22:59.125988] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.866 [2024-04-23 21:22:59.126006] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:04.866 [2024-04-23 21:22:59.126018] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
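Note: this negative case hands the initiator the second key (/tmp/tmp.93avnLhUSa) while host1 was registered on the target with the first one (/tmp/tmp.bZcTclfEgH), so the TLS handshake cannot complete and the connection drops, which the initiator surfaces as the errno 107 and bad file descriptor errors above. Both key files hold interchange-format PSKs produced by format_interchange_psk earlier in the run. A minimal sketch of what that helper's inline python computes, assuming it follows SPDK's layout of base64 over the ASCII secret plus a little-endian CRC32 tail (the one-liner framing is illustrative, not the helper's exact text):

    # Should reproduce the first key captured above,
    # NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
    python -c 'import base64, zlib; key = b"00112233445566778899aabbccddeeff"; crc = zlib.crc32(key).to_bytes(4, "little"); print("NVMeTLSkey-1:{:02x}:{}:".format(1, base64.b64encode(key + crc).decode()))'

The second secret (ffeeddccbbaa99887766554433221100) encodes to a different string, which is why the two tmp files never interoperate. The failed attach is recorded as a JSON-RPC exchange below.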
00:22:04.866 request: 00:22:04.866 { 00:22:04.866 "name": "TLSTEST", 00:22:04.866 "trtype": "tcp", 00:22:04.866 "traddr": "10.0.0.2", 00:22:04.866 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:04.866 "adrfam": "ipv4", 00:22:04.866 "trsvcid": "4420", 00:22:04.866 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:04.866 "psk": "/tmp/tmp.93avnLhUSa", 00:22:04.866 "method": "bdev_nvme_attach_controller", 00:22:04.866 "req_id": 1 00:22:04.866 } 00:22:04.866 Got JSON-RPC error response 00:22:04.866 response: 00:22:04.866 { 00:22:04.866 "code": -32602, 00:22:04.866 "message": "Invalid parameters" 00:22:04.866 } 00:22:05.125 21:22:59 -- target/tls.sh@36 -- # killprocess 1496158 00:22:05.125 21:22:59 -- common/autotest_common.sh@936 -- # '[' -z 1496158 ']' 00:22:05.125 21:22:59 -- common/autotest_common.sh@940 -- # kill -0 1496158 00:22:05.125 21:22:59 -- common/autotest_common.sh@941 -- # uname 00:22:05.125 21:22:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:05.125 21:22:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1496158 00:22:05.125 21:22:59 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:22:05.125 21:22:59 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:22:05.125 21:22:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1496158' 00:22:05.125 killing process with pid 1496158 00:22:05.125 21:22:59 -- common/autotest_common.sh@955 -- # kill 1496158 00:22:05.125 Received shutdown signal, test time was about 10.000000 seconds 00:22:05.125 00:22:05.125 Latency(us) 00:22:05.125 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:05.125 =================================================================================================================== 00:22:05.125 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:05.125 [2024-04-23 21:22:59.200680] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:05.125 21:22:59 -- common/autotest_common.sh@960 -- # wait 1496158 00:22:05.384 21:22:59 -- target/tls.sh@37 -- # return 1 00:22:05.384 21:22:59 -- common/autotest_common.sh@641 -- # es=1 00:22:05.384 21:22:59 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:22:05.384 21:22:59 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:22:05.384 21:22:59 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:22:05.384 21:22:59 -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.bZcTclfEgH 00:22:05.384 21:22:59 -- common/autotest_common.sh@638 -- # local es=0 00:22:05.384 21:22:59 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.bZcTclfEgH 00:22:05.384 21:22:59 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:22:05.384 21:22:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:05.384 21:22:59 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:22:05.384 21:22:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:05.384 21:22:59 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.bZcTclfEgH 00:22:05.384 21:22:59 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:05.384 21:22:59 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:05.384 21:22:59 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 
00:22:05.384 21:22:59 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.bZcTclfEgH' 00:22:05.384 21:22:59 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:05.384 21:22:59 -- target/tls.sh@28 -- # bdevperf_pid=1496361 00:22:05.384 21:22:59 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:05.384 21:22:59 -- target/tls.sh@31 -- # waitforlisten 1496361 /var/tmp/bdevperf.sock 00:22:05.384 21:22:59 -- common/autotest_common.sh@817 -- # '[' -z 1496361 ']' 00:22:05.384 21:22:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:05.384 21:22:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:05.384 21:22:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:05.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:05.385 21:22:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:05.385 21:22:59 -- common/autotest_common.sh@10 -- # set +x 00:22:05.385 21:22:59 -- target/tls.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:05.644 [2024-04-23 21:22:59.675784] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 00:22:05.644 [2024-04-23 21:22:59.675929] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1496361 ] 00:22:05.644 EAL: No free 2048 kB hugepages reported on node 1 00:22:05.644 [2024-04-23 21:22:59.798012] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:05.644 [2024-04-23 21:22:59.892977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:06.210 21:23:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:06.210 21:23:00 -- common/autotest_common.sh@850 -- # return 0 00:22:06.210 21:23:00 -- target/tls.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.bZcTclfEgH 00:22:06.471 [2024-04-23 21:23:00.496274] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:06.471 [2024-04-23 21:23:00.496394] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:06.471 [2024-04-23 21:23:00.503770] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:06.471 [2024-04-23 21:23:00.503800] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:06.471 [2024-04-23 21:23:00.503840] /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:06.471 [2024-04-23 21:23:00.504188] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004c40 (107): Transport endpoint is not connected 00:22:06.471 [2024-04-23 21:23:00.505167] 
nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004c40 (9): Bad file descriptor 00:22:06.471 [2024-04-23 21:23:00.506159] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:06.471 [2024-04-23 21:23:00.506177] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:06.471 [2024-04-23 21:23:00.506189] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:06.471 request: 00:22:06.471 { 00:22:06.471 "name": "TLSTEST", 00:22:06.471 "trtype": "tcp", 00:22:06.471 "traddr": "10.0.0.2", 00:22:06.471 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:06.471 "adrfam": "ipv4", 00:22:06.471 "trsvcid": "4420", 00:22:06.471 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:06.471 "psk": "/tmp/tmp.bZcTclfEgH", 00:22:06.471 "method": "bdev_nvme_attach_controller", 00:22:06.471 "req_id": 1 00:22:06.471 } 00:22:06.471 Got JSON-RPC error response 00:22:06.471 response: 00:22:06.471 { 00:22:06.471 "code": -32602, 00:22:06.471 "message": "Invalid parameters" 00:22:06.471 } 00:22:06.471 21:23:00 -- target/tls.sh@36 -- # killprocess 1496361 00:22:06.471 21:23:00 -- common/autotest_common.sh@936 -- # '[' -z 1496361 ']' 00:22:06.471 21:23:00 -- common/autotest_common.sh@940 -- # kill -0 1496361 00:22:06.471 21:23:00 -- common/autotest_common.sh@941 -- # uname 00:22:06.471 21:23:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:06.471 21:23:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1496361 00:22:06.471 21:23:00 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:22:06.471 21:23:00 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:22:06.471 21:23:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1496361' 00:22:06.471 killing process with pid 1496361 00:22:06.471 21:23:00 -- common/autotest_common.sh@955 -- # kill 1496361 00:22:06.471 Received shutdown signal, test time was about 10.000000 seconds 00:22:06.471 00:22:06.471 Latency(us) 00:22:06.471 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:06.471 =================================================================================================================== 00:22:06.471 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:06.471 [2024-04-23 21:23:00.565789] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:06.471 21:23:00 -- common/autotest_common.sh@960 -- # wait 1496361 00:22:06.733 21:23:00 -- target/tls.sh@37 -- # return 1 00:22:06.733 21:23:00 -- common/autotest_common.sh@641 -- # es=1 00:22:06.733 21:23:00 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:22:06.733 21:23:00 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:22:06.733 21:23:00 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:22:06.733 21:23:00 -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.bZcTclfEgH 00:22:06.733 21:23:00 -- common/autotest_common.sh@638 -- # local es=0 00:22:06.733 21:23:00 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.bZcTclfEgH 00:22:06.733 21:23:00 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:22:06.733 21:23:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:06.733 21:23:00 -- 
common/autotest_common.sh@630 -- # type -t run_bdevperf 00:22:06.733 21:23:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:06.733 21:23:00 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.bZcTclfEgH 00:22:06.733 21:23:00 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:06.733 21:23:00 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:22:06.733 21:23:00 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:06.733 21:23:00 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.bZcTclfEgH' 00:22:06.733 21:23:00 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:06.733 21:23:00 -- target/tls.sh@28 -- # bdevperf_pid=1496543 00:22:06.733 21:23:00 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:06.733 21:23:00 -- target/tls.sh@31 -- # waitforlisten 1496543 /var/tmp/bdevperf.sock 00:22:06.733 21:23:00 -- target/tls.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:06.733 21:23:00 -- common/autotest_common.sh@817 -- # '[' -z 1496543 ']' 00:22:06.733 21:23:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:06.733 21:23:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:06.733 21:23:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:06.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:06.733 21:23:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:06.733 21:23:00 -- common/autotest_common.sh@10 -- # set +x 00:22:06.733 [2024-04-23 21:23:00.993148] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 
00:22:06.733 [2024-04-23 21:23:00.993266] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1496543 ] 00:22:06.993 EAL: No free 2048 kB hugepages reported on node 1 00:22:06.993 [2024-04-23 21:23:01.106244] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:06.993 [2024-04-23 21:23:01.202134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:07.562 21:23:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:07.562 21:23:01 -- common/autotest_common.sh@850 -- # return 0 00:22:07.562 21:23:01 -- target/tls.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.bZcTclfEgH 00:22:07.821 [2024-04-23 21:23:01.838588] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:07.821 [2024-04-23 21:23:01.838725] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:07.821 [2024-04-23 21:23:01.845930] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:07.821 [2024-04-23 21:23:01.845962] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:07.821 [2024-04-23 21:23:01.845996] /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:07.821 [2024-04-23 21:23:01.846367] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004c40 (107): Transport endpoint is not connected 00:22:07.821 [2024-04-23 21:23:01.847346] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004c40 (9): Bad file descriptor 00:22:07.821 [2024-04-23 21:23:01.848340] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:07.821 [2024-04-23 21:23:01.848355] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:07.821 [2024-04-23 21:23:01.848367] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
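Note: this case and the host2 case above fail a step earlier than the wrong-key case: the target never finds a registered PSK at all, because nvmf_subsystem_add_host stored the key under the (hostnqn, subnqn) pair (host1, cnode1) only. The lookup identity is printed verbatim in the errors; assuming it is built exactly as shown there, a one-line sketch (the helper name is hypothetical):

    # "Could not find PSK for identity: NVMe0R01 <hostnqn> <subnqn>"
    tls_psk_identity() { printf 'NVMe0R01 %s %s' "$1" "$2"; }
    tls_psk_identity nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2

Swapping either NQN (host2 against cnode1 before, host1 against cnode2 here) misses the lookup before any handshake begins; the JSON-RPC record of the failed attach follows.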
00:22:07.821 request: 00:22:07.821 { 00:22:07.821 "name": "TLSTEST", 00:22:07.821 "trtype": "tcp", 00:22:07.821 "traddr": "10.0.0.2", 00:22:07.821 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:07.821 "adrfam": "ipv4", 00:22:07.821 "trsvcid": "4420", 00:22:07.821 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:07.821 "psk": "/tmp/tmp.bZcTclfEgH", 00:22:07.821 "method": "bdev_nvme_attach_controller", 00:22:07.821 "req_id": 1 00:22:07.821 } 00:22:07.821 Got JSON-RPC error response 00:22:07.821 response: 00:22:07.821 { 00:22:07.821 "code": -32602, 00:22:07.821 "message": "Invalid parameters" 00:22:07.821 } 00:22:07.821 21:23:01 -- target/tls.sh@36 -- # killprocess 1496543 00:22:07.821 21:23:01 -- common/autotest_common.sh@936 -- # '[' -z 1496543 ']' 00:22:07.821 21:23:01 -- common/autotest_common.sh@940 -- # kill -0 1496543 00:22:07.821 21:23:01 -- common/autotest_common.sh@941 -- # uname 00:22:07.821 21:23:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:07.821 21:23:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1496543 00:22:07.821 21:23:01 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:22:07.821 21:23:01 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:22:07.821 21:23:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1496543' 00:22:07.821 killing process with pid 1496543 00:22:07.821 21:23:01 -- common/autotest_common.sh@955 -- # kill 1496543 00:22:07.821 Received shutdown signal, test time was about 10.000000 seconds 00:22:07.821 00:22:07.821 Latency(us) 00:22:07.821 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:07.821 =================================================================================================================== 00:22:07.821 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:07.821 [2024-04-23 21:23:01.901012] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:07.821 21:23:01 -- common/autotest_common.sh@960 -- # wait 1496543 00:22:08.080 21:23:02 -- target/tls.sh@37 -- # return 1 00:22:08.080 21:23:02 -- common/autotest_common.sh@641 -- # es=1 00:22:08.080 21:23:02 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:22:08.080 21:23:02 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:22:08.080 21:23:02 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:22:08.080 21:23:02 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:08.080 21:23:02 -- common/autotest_common.sh@638 -- # local es=0 00:22:08.080 21:23:02 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:08.080 21:23:02 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:22:08.080 21:23:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:08.080 21:23:02 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:22:08.080 21:23:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:08.080 21:23:02 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:08.080 21:23:02 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:08.080 21:23:02 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:08.080 21:23:02 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:08.081 21:23:02 -- target/tls.sh@23 -- # psk= 
00:22:08.081 21:23:02 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:08.081 21:23:02 -- target/tls.sh@28 -- # bdevperf_pid=1496822 00:22:08.081 21:23:02 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:08.081 21:23:02 -- target/tls.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:08.081 21:23:02 -- target/tls.sh@31 -- # waitforlisten 1496822 /var/tmp/bdevperf.sock 00:22:08.081 21:23:02 -- common/autotest_common.sh@817 -- # '[' -z 1496822 ']' 00:22:08.081 21:23:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:08.081 21:23:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:08.081 21:23:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:08.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:08.081 21:23:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:08.081 21:23:02 -- common/autotest_common.sh@10 -- # set +x 00:22:08.081 [2024-04-23 21:23:02.327712] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 00:22:08.081 [2024-04-23 21:23:02.327796] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1496822 ] 00:22:08.340 EAL: No free 2048 kB hugepages reported on node 1 00:22:08.340 [2024-04-23 21:23:02.439833] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:08.340 [2024-04-23 21:23:02.537210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:08.911 21:23:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:08.911 21:23:03 -- common/autotest_common.sh@850 -- # return 0 00:22:08.911 21:23:03 -- target/tls.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:09.171 [2024-04-23 21:23:03.204150] /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:09.171 [2024-04-23 21:23:03.205551] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004c40 (9): Bad file descriptor 00:22:09.171 [2024-04-23 21:23:03.206543] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:09.171 [2024-04-23 21:23:03.206559] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:09.171 [2024-04-23 21:23:03.206575] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:22:09.171 request:
00:22:09.171 {
00:22:09.171 "name": "TLSTEST",
00:22:09.171 "trtype": "tcp",
00:22:09.171 "traddr": "10.0.0.2",
00:22:09.171 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:22:09.171 "adrfam": "ipv4",
00:22:09.171 "trsvcid": "4420",
00:22:09.171 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:22:09.171 "method": "bdev_nvme_attach_controller",
00:22:09.171 "req_id": 1
00:22:09.171 }
00:22:09.171 Got JSON-RPC error response
00:22:09.171 response:
00:22:09.171 {
00:22:09.171 "code": -32602,
00:22:09.171 "message": "Invalid parameters"
00:22:09.171 }
00:22:09.171 21:23:03 -- target/tls.sh@36 -- # killprocess 1496822
00:22:09.171 21:23:03 -- common/autotest_common.sh@936 -- # '[' -z 1496822 ']'
00:22:09.171 21:23:03 -- common/autotest_common.sh@940 -- # kill -0 1496822
00:22:09.171 21:23:03 -- common/autotest_common.sh@941 -- # uname
00:22:09.171 21:23:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:22:09.171 21:23:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1496822
00:22:09.171 21:23:03 -- common/autotest_common.sh@942 -- # process_name=reactor_2
00:22:09.171 21:23:03 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:22:09.171 21:23:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1496822'
killing process with pid 1496822
21:23:03 -- common/autotest_common.sh@955 -- # kill 1496822
Received shutdown signal, test time was about 10.000000 seconds
00:22:09.171
00:22:09.171 Latency(us)
00:22:09.171 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:09.171 ===================================================================================================================
00:22:09.171 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
21:23:03 -- common/autotest_common.sh@960 -- # wait 1496822
00:22:09.430 21:23:03 -- target/tls.sh@37 -- # return 1
00:22:09.430 21:23:03 -- common/autotest_common.sh@641 -- # es=1
00:22:09.430 21:23:03 -- common/autotest_common.sh@649 -- # (( es > 128 ))
00:22:09.430 21:23:03 -- common/autotest_common.sh@660 -- # [[ -n '' ]]
00:22:09.430 21:23:03 -- common/autotest_common.sh@665 -- # (( !es == 0 ))
00:22:09.430 21:23:03 -- target/tls.sh@158 -- # killprocess 1491151
00:22:09.430 21:23:03 -- common/autotest_common.sh@936 -- # '[' -z 1491151 ']'
00:22:09.430 21:23:03 -- common/autotest_common.sh@940 -- # kill -0 1491151
00:22:09.430 21:23:03 -- common/autotest_common.sh@941 -- # uname
00:22:09.430 21:23:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:22:09.430 21:23:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1491151
00:22:09.430 21:23:03 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:22:09.430 21:23:03 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:22:09.430 21:23:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1491151'
killing process with pid 1491151
21:23:03 -- common/autotest_common.sh@955 -- # kill 1491151
[2024-04-23 21:23:03.664112] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times
21:23:03 -- common/autotest_common.sh@960 -- # wait 1491151
00:22:10.134 21:23:04 -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2
00:22:10.134 21:23:04 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2
00:22:10.134 21:23:04 -- nvmf/common.sh@691 -- # local prefix key digest
00:22:10.134 21:23:04 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1
00:22:10.134 21:23:04 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff0011223344556677
00:22:10.134 21:23:04 -- nvmf/common.sh@693 -- # digest=2
00:22:10.134 21:23:04 -- nvmf/common.sh@694 -- # python -
00:22:10.134 21:23:04 -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:
00:22:10.134 21:23:04 -- target/tls.sh@160 -- # mktemp
00:22:10.134 21:23:04 -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.SPeQoxy81J
00:22:10.134 21:23:04 -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:
00:22:10.134 21:23:04 -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.SPeQoxy81J
00:22:10.134 21:23:04 -- target/tls.sh@163 -- # nvmfappstart -m 0x2
00:22:10.134 21:23:04 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:22:10.134 21:23:04 -- common/autotest_common.sh@710 -- # xtrace_disable
00:22:10.134 21:23:04 -- common/autotest_common.sh@10 -- # set +x
00:22:10.134 21:23:04 -- nvmf/common.sh@470 -- # nvmfpid=1497310
00:22:10.134 21:23:04 -- nvmf/common.sh@471 -- # waitforlisten 1497310
00:22:10.134 21:23:04 -- common/autotest_common.sh@817 -- # '[' -z 1497310 ']'
00:22:10.134 21:23:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:10.134 21:23:04 -- common/autotest_common.sh@822 -- # local max_retries=100
00:22:10.134 21:23:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
21:23:04 -- common/autotest_common.sh@826 -- # xtrace_disable
00:22:10.134 21:23:04 -- common/autotest_common.sh@10 -- # set +x
00:22:10.134 21:23:04 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:22:10.134 [2024-04-23 21:23:04.343129] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization...
[2024-04-23 21:23:04.343244] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:22:10.396 EAL: No free 2048 kB hugepages reported on node 1
00:22:10.396 [2024-04-23 21:23:04.478254] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:10.396 [2024-04-23 21:23:04.573662] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:22:10.396 [2024-04-23 21:23:04.573711] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:22:10.396 [2024-04-23 21:23:04.573722] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:22:10.396 [2024-04-23 21:23:04.573732] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:22:10.396 [2024-04-23 21:23:04.573740] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
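
The key_long value produced above is the NVMe TLS PSK interchange form "NVMeTLSkey-1:<hash>:<base64>:". Decoding the log's base64 payload yields the configured hex string followed by four extra bytes, consistent with an appended little-endian CRC-32 of the key text; the "02" field corresponds to the second argument of format_interchange_psk and appears to select the PSK hash variant. A sketch that rebuilds the value seen above under those assumptions:

    # Sketch: reconstruct key_long from the configured key. The CRC-32,
    # little-endian detail is inferred from the decoded payload, not
    # taken from tls.sh itself.
    key=00112233445566778899aabbccddeeff0011223344556677
    key_long=$(python3 -c '
    import base64, sys, zlib
    key = sys.argv[1].encode()
    crc = zlib.crc32(key).to_bytes(4, "little")
    print("NVMeTLSkey-1:02:" + base64.b64encode(key + crc).decode() + ":")
    ' "$key")
    key_path=$(mktemp)            # /tmp/tmp.SPeQoxy81J in this run
    echo -n "$key_long" > "$key_path"
    chmod 0600 "$key_path"        # looser modes are rejected, as tested later

Once the key file exists with 0600 permissions, the target is brought up, the key is registered per host via nvmf_subsystem_add_host --psk, and the bdevperf attach with the same --psk succeeds; bdevperf.py -t 20 ... perform_tests then drives the TLSTESTn1 I/O measured below.
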
00:22:10.396 [2024-04-23 21:23:04.573771] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:10.969 21:23:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:10.969 21:23:05 -- common/autotest_common.sh@850 -- # return 0 00:22:10.969 21:23:05 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:10.969 21:23:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:10.969 21:23:05 -- common/autotest_common.sh@10 -- # set +x 00:22:10.969 21:23:05 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:10.969 21:23:05 -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.SPeQoxy81J 00:22:10.969 21:23:05 -- target/tls.sh@49 -- # local key=/tmp/tmp.SPeQoxy81J 00:22:10.969 21:23:05 -- target/tls.sh@51 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:11.229 [2024-04-23 21:23:05.293448] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:11.229 21:23:05 -- target/tls.sh@52 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:11.229 21:23:05 -- target/tls.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:11.488 [2024-04-23 21:23:05.573505] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:11.488 [2024-04-23 21:23:05.573754] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:11.488 21:23:05 -- target/tls.sh@55 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:11.488 malloc0 00:22:11.488 21:23:05 -- target/tls.sh@56 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:11.747 21:23:05 -- target/tls.sh@58 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.SPeQoxy81J 00:22:11.747 [2024-04-23 21:23:06.008560] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:12.007 21:23:06 -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.SPeQoxy81J 00:22:12.007 21:23:06 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:12.007 21:23:06 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:12.007 21:23:06 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:12.007 21:23:06 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.SPeQoxy81J' 00:22:12.007 21:23:06 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:12.007 21:23:06 -- target/tls.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:12.007 21:23:06 -- target/tls.sh@28 -- # bdevperf_pid=1497750 00:22:12.008 21:23:06 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:12.008 21:23:06 -- target/tls.sh@31 -- # waitforlisten 1497750 /var/tmp/bdevperf.sock 00:22:12.008 21:23:06 -- common/autotest_common.sh@817 -- # '[' -z 1497750 ']' 00:22:12.008 21:23:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:12.008 21:23:06 -- common/autotest_common.sh@822 -- # local 
max_retries=100 00:22:12.008 21:23:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:12.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:12.008 21:23:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:12.008 21:23:06 -- common/autotest_common.sh@10 -- # set +x 00:22:12.008 [2024-04-23 21:23:06.094661] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 00:22:12.008 [2024-04-23 21:23:06.094774] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1497750 ] 00:22:12.008 EAL: No free 2048 kB hugepages reported on node 1 00:22:12.008 [2024-04-23 21:23:06.209491] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:12.269 [2024-04-23 21:23:06.303480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:12.531 21:23:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:12.531 21:23:06 -- common/autotest_common.sh@850 -- # return 0 00:22:12.531 21:23:06 -- target/tls.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.SPeQoxy81J 00:22:12.790 [2024-04-23 21:23:06.929152] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:12.790 [2024-04-23 21:23:06.929284] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:12.790 TLSTESTn1 00:22:12.790 21:23:07 -- target/tls.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:13.049 Running I/O for 10 seconds... 
00:22:23.031
00:22:23.031 Latency(us)
00:22:23.031 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:23.031 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:22:23.031 Verification LBA range: start 0x0 length 0x2000
00:22:23.031 TLSTESTn1 : 10.03 3470.37 13.56 0.00 0.00 36812.39 5311.87 100442.54
00:22:23.031 ===================================================================================================================
00:22:23.031 Total : 3470.37 13.56 0.00 0.00 36812.39 5311.87 100442.54
00:22:23.031 0
00:22:23.031 21:23:17 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:22:23.031 21:23:17 -- target/tls.sh@45 -- # killprocess 1497750
00:22:23.031 21:23:17 -- common/autotest_common.sh@936 -- # '[' -z 1497750 ']'
00:22:23.031 21:23:17 -- common/autotest_common.sh@940 -- # kill -0 1497750
00:22:23.031 21:23:17 -- common/autotest_common.sh@941 -- # uname
00:22:23.031 21:23:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:22:23.031 21:23:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1497750
00:22:23.031 21:23:17 -- common/autotest_common.sh@942 -- # process_name=reactor_2
00:22:23.031 21:23:17 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:22:23.031 21:23:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1497750'
killing process with pid 1497750
21:23:17 -- common/autotest_common.sh@955 -- # kill 1497750
Received shutdown signal, test time was about 10.000000 seconds
00:22:23.031
00:22:23.031 Latency(us)
00:22:23.031 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:23.031 ===================================================================================================================
00:22:23.031 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:22:23.031 [2024-04-23 21:23:17.203286] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times
00:22:23.031 21:23:17 -- common/autotest_common.sh@960 -- # wait 1497750
00:22:23.600 21:23:17 -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.SPeQoxy81J
00:22:23.600 21:23:17 -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.SPeQoxy81J
00:22:23.600 21:23:17 -- common/autotest_common.sh@638 -- # local es=0
00:22:23.600 21:23:17 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.SPeQoxy81J
00:22:23.600 21:23:17 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf
00:22:23.600 21:23:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:22:23.600 21:23:17 -- common/autotest_common.sh@630 -- # type -t run_bdevperf
00:22:23.600 21:23:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:22:23.600 21:23:17 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.SPeQoxy81J
00:22:23.600 21:23:17 -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:22:23.600 21:23:17 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:22:23.600 21:23:17 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:22:23.600 21:23:17 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.SPeQoxy81J'
00:22:23.600 21:23:17 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:22:23.600 21:23:17 -- target/tls.sh@28 -- #
bdevperf_pid=1499823 00:22:23.600 21:23:17 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:23.600 21:23:17 -- target/tls.sh@31 -- # waitforlisten 1499823 /var/tmp/bdevperf.sock 00:22:23.600 21:23:17 -- common/autotest_common.sh@817 -- # '[' -z 1499823 ']' 00:22:23.600 21:23:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:23.600 21:23:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:23.600 21:23:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:23.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:23.600 21:23:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:23.600 21:23:17 -- common/autotest_common.sh@10 -- # set +x 00:22:23.600 21:23:17 -- target/tls.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:23.600 [2024-04-23 21:23:17.660805] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 00:22:23.600 [2024-04-23 21:23:17.660929] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1499823 ] 00:22:23.600 EAL: No free 2048 kB hugepages reported on node 1 00:22:23.600 [2024-04-23 21:23:17.782217] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:23.861 [2024-04-23 21:23:17.874249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:24.122 21:23:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:24.122 21:23:18 -- common/autotest_common.sh@850 -- # return 0 00:22:24.122 21:23:18 -- target/tls.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.SPeQoxy81J 00:22:24.381 [2024-04-23 21:23:18.515144] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:24.381 [2024-04-23 21:23:18.515210] bdev_nvme.c:6067:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:22:24.381 [2024-04-23 21:23:18.515226] bdev_nvme.c:6176:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.SPeQoxy81J 00:22:24.381 request: 00:22:24.381 { 00:22:24.381 "name": "TLSTEST", 00:22:24.381 "trtype": "tcp", 00:22:24.381 "traddr": "10.0.0.2", 00:22:24.381 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:24.381 "adrfam": "ipv4", 00:22:24.381 "trsvcid": "4420", 00:22:24.381 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:24.381 "psk": "/tmp/tmp.SPeQoxy81J", 00:22:24.381 "method": "bdev_nvme_attach_controller", 00:22:24.381 "req_id": 1 00:22:24.381 } 00:22:24.381 Got JSON-RPC error response 00:22:24.381 response: 00:22:24.381 { 00:22:24.381 "code": -1, 00:22:24.381 "message": "Operation not permitted" 00:22:24.381 } 00:22:24.381 21:23:18 -- target/tls.sh@36 -- # killprocess 1499823 00:22:24.381 21:23:18 -- common/autotest_common.sh@936 -- # '[' -z 1499823 ']' 00:22:24.381 21:23:18 -- common/autotest_common.sh@940 -- # kill -0 1499823 00:22:24.381 21:23:18 -- common/autotest_common.sh@941 -- # uname 00:22:24.381 21:23:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:24.381 21:23:18 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1499823 00:22:24.381 21:23:18 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:22:24.381 21:23:18 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:22:24.381 21:23:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1499823' 00:22:24.381 killing process with pid 1499823 00:22:24.381 21:23:18 -- common/autotest_common.sh@955 -- # kill 1499823 00:22:24.381 Received shutdown signal, test time was about 10.000000 seconds 00:22:24.381 00:22:24.381 Latency(us) 00:22:24.381 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:24.381 =================================================================================================================== 00:22:24.381 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:24.381 21:23:18 -- common/autotest_common.sh@960 -- # wait 1499823 00:22:24.948 21:23:18 -- target/tls.sh@37 -- # return 1 00:22:24.948 21:23:18 -- common/autotest_common.sh@641 -- # es=1 00:22:24.948 21:23:18 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:22:24.948 21:23:18 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:22:24.948 21:23:18 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:22:24.948 21:23:18 -- target/tls.sh@174 -- # killprocess 1497310 00:22:24.948 21:23:18 -- common/autotest_common.sh@936 -- # '[' -z 1497310 ']' 00:22:24.948 21:23:18 -- common/autotest_common.sh@940 -- # kill -0 1497310 00:22:24.948 21:23:18 -- common/autotest_common.sh@941 -- # uname 00:22:24.948 21:23:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:24.948 21:23:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1497310 00:22:24.948 21:23:18 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:24.948 21:23:18 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:24.948 21:23:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1497310' 00:22:24.948 killing process with pid 1497310 00:22:24.948 21:23:18 -- common/autotest_common.sh@955 -- # kill 1497310 00:22:24.948 [2024-04-23 21:23:18.980713] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:24.948 21:23:18 -- common/autotest_common.sh@960 -- # wait 1497310 00:22:25.517 21:23:19 -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:22:25.518 21:23:19 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:25.518 21:23:19 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:25.518 21:23:19 -- common/autotest_common.sh@10 -- # set +x 00:22:25.518 21:23:19 -- nvmf/common.sh@470 -- # nvmfpid=1500153 00:22:25.518 21:23:19 -- nvmf/common.sh@471 -- # waitforlisten 1500153 00:22:25.518 21:23:19 -- common/autotest_common.sh@817 -- # '[' -z 1500153 ']' 00:22:25.518 21:23:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:25.518 21:23:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:25.518 21:23:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:25.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
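
The chmod 0666 experiment that just failed shows the PSK file's mode is enforced on the host side: bdev_nvme_load_psk refuses a group- or world-accessible key ("Incorrect permissions for PSK file"), so the attach returns -1 "Operation not permitted" without ever reaching the wire. A pre-flight check mirroring that constraint might look like this (helper name ours; whether 0400 is also accepted is an assumption):

    # Sketch: reject a PSK file whose mode grants group/other access,
    # matching the "Incorrect permissions for PSK file" error above.
    check_psk_mode() {
        local key_file=$1 mode
        mode=$(stat -c '%a' "$key_file")
        if [ "$mode" != "600" ] && [ "$mode" != "400" ]; then
            echo "refusing $key_file: mode $mode is too permissive" >&2
            return 1
        fi
    }
    check_psk_mode /tmp/tmp.SPeQoxy81J

The next block repeats the same experiment on the target side, where nvmf_subsystem_add_host performs the equivalent check and fails with -32603 "Internal error".
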
00:22:25.518 21:23:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:25.518 21:23:19 -- common/autotest_common.sh@10 -- # set +x 00:22:25.518 21:23:19 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:25.518 [2024-04-23 21:23:19.563507] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 00:22:25.518 [2024-04-23 21:23:19.563587] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:25.518 EAL: No free 2048 kB hugepages reported on node 1 00:22:25.518 [2024-04-23 21:23:19.658074] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:25.518 [2024-04-23 21:23:19.749232] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:25.518 [2024-04-23 21:23:19.749267] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:25.518 [2024-04-23 21:23:19.749276] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:25.518 [2024-04-23 21:23:19.749286] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:25.518 [2024-04-23 21:23:19.749293] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:25.518 [2024-04-23 21:23:19.749321] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:26.092 21:23:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:26.092 21:23:20 -- common/autotest_common.sh@850 -- # return 0 00:22:26.092 21:23:20 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:26.092 21:23:20 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:26.092 21:23:20 -- common/autotest_common.sh@10 -- # set +x 00:22:26.092 21:23:20 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:26.092 21:23:20 -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.SPeQoxy81J 00:22:26.092 21:23:20 -- common/autotest_common.sh@638 -- # local es=0 00:22:26.092 21:23:20 -- common/autotest_common.sh@640 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.SPeQoxy81J 00:22:26.092 21:23:20 -- common/autotest_common.sh@626 -- # local arg=setup_nvmf_tgt 00:22:26.092 21:23:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:26.092 21:23:20 -- common/autotest_common.sh@630 -- # type -t setup_nvmf_tgt 00:22:26.092 21:23:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:26.092 21:23:20 -- common/autotest_common.sh@641 -- # setup_nvmf_tgt /tmp/tmp.SPeQoxy81J 00:22:26.092 21:23:20 -- target/tls.sh@49 -- # local key=/tmp/tmp.SPeQoxy81J 00:22:26.092 21:23:20 -- target/tls.sh@51 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:26.350 [2024-04-23 21:23:20.427011] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:26.350 21:23:20 -- target/tls.sh@52 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:26.350 21:23:20 -- target/tls.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:26.608 [2024-04-23 
21:23:20.711031] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:26.608 [2024-04-23 21:23:20.711251] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:26.608 21:23:20 -- target/tls.sh@55 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:26.608 malloc0 00:22:26.867 21:23:20 -- target/tls.sh@56 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:26.867 21:23:21 -- target/tls.sh@58 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.SPeQoxy81J 00:22:26.867 [2024-04-23 21:23:21.133359] tcp.c:3562:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:22:26.867 [2024-04-23 21:23:21.133393] tcp.c:3648:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:22:26.867 [2024-04-23 21:23:21.133420] subsystem.c: 971:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:22:26.867 request: 00:22:26.867 { 00:22:26.867 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:26.867 "host": "nqn.2016-06.io.spdk:host1", 00:22:26.867 "psk": "/tmp/tmp.SPeQoxy81J", 00:22:26.867 "method": "nvmf_subsystem_add_host", 00:22:26.867 "req_id": 1 00:22:26.867 } 00:22:26.867 Got JSON-RPC error response 00:22:26.867 response: 00:22:26.867 { 00:22:26.867 "code": -32603, 00:22:26.867 "message": "Internal error" 00:22:26.867 } 00:22:27.127 21:23:21 -- common/autotest_common.sh@641 -- # es=1 00:22:27.127 21:23:21 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:22:27.127 21:23:21 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:22:27.127 21:23:21 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:22:27.127 21:23:21 -- target/tls.sh@180 -- # killprocess 1500153 00:22:27.127 21:23:21 -- common/autotest_common.sh@936 -- # '[' -z 1500153 ']' 00:22:27.127 21:23:21 -- common/autotest_common.sh@940 -- # kill -0 1500153 00:22:27.127 21:23:21 -- common/autotest_common.sh@941 -- # uname 00:22:27.127 21:23:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:27.127 21:23:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1500153 00:22:27.127 21:23:21 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:27.127 21:23:21 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:27.127 21:23:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1500153' 00:22:27.127 killing process with pid 1500153 00:22:27.127 21:23:21 -- common/autotest_common.sh@955 -- # kill 1500153 00:22:27.127 21:23:21 -- common/autotest_common.sh@960 -- # wait 1500153 00:22:27.698 21:23:21 -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.SPeQoxy81J 00:22:27.698 21:23:21 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:22:27.698 21:23:21 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:27.698 21:23:21 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:27.698 21:23:21 -- common/autotest_common.sh@10 -- # set +x 00:22:27.698 21:23:21 -- nvmf/common.sh@470 -- # nvmfpid=1500750 00:22:27.698 21:23:21 -- nvmf/common.sh@471 -- # waitforlisten 1500750 00:22:27.698 21:23:21 -- common/autotest_common.sh@817 -- # '[' -z 1500750 ']' 00:22:27.698 21:23:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:27.698 21:23:21 -- common/autotest_common.sh@822 -- # local max_retries=100 
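
The xtrace around here is autotest_common.sh's waitforlisten polling the freshly started target until its RPC socket answers, honoring the max_retries=100 seen above. The idea, heavily condensed (the real helper is more involved; rpc_get_methods is just a cheap RPC to probe with):

    # Sketch of the waitforlisten idea: poll the RPC socket until the
    # app responds, bailing out early if the process dies.
    waitforlisten_sketch() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            scripts/rpc.py -s "$sock" rpc_get_methods &> /dev/null && return 0
            kill -0 "$pid" 2> /dev/null || return 1   # app exited while waiting
            sleep 0.1
        done
        return 1
    }
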
00:22:27.698 21:23:21 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:27.698 21:23:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:27.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:27.698 21:23:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:27.698 21:23:21 -- common/autotest_common.sh@10 -- # set +x 00:22:27.698 [2024-04-23 21:23:21.805042] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 00:22:27.698 [2024-04-23 21:23:21.805187] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:27.698 EAL: No free 2048 kB hugepages reported on node 1 00:22:27.698 [2024-04-23 21:23:21.949695] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:27.960 [2024-04-23 21:23:22.045739] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:27.960 [2024-04-23 21:23:22.045792] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:27.960 [2024-04-23 21:23:22.045802] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:27.960 [2024-04-23 21:23:22.045816] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:27.960 [2024-04-23 21:23:22.045824] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:27.960 [2024-04-23 21:23:22.045860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:28.526 21:23:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:28.526 21:23:22 -- common/autotest_common.sh@850 -- # return 0 00:22:28.526 21:23:22 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:28.526 21:23:22 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:28.526 21:23:22 -- common/autotest_common.sh@10 -- # set +x 00:22:28.526 21:23:22 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:28.526 21:23:22 -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.SPeQoxy81J 00:22:28.526 21:23:22 -- target/tls.sh@49 -- # local key=/tmp/tmp.SPeQoxy81J 00:22:28.526 21:23:22 -- target/tls.sh@51 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:28.526 [2024-04-23 21:23:22.657437] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:28.526 21:23:22 -- target/tls.sh@52 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:28.785 21:23:22 -- target/tls.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:28.785 [2024-04-23 21:23:22.909482] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:28.785 [2024-04-23 21:23:22.909731] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:28.786 21:23:22 -- target/tls.sh@55 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 
00:22:28.786 malloc0 00:22:29.045 21:23:23 -- target/tls.sh@56 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:29.045 21:23:23 -- target/tls.sh@58 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.SPeQoxy81J 00:22:29.045 [2024-04-23 21:23:23.305955] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:29.307 21:23:23 -- target/tls.sh@188 -- # bdevperf_pid=1501076 00:22:29.307 21:23:23 -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:29.307 21:23:23 -- target/tls.sh@187 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:29.307 21:23:23 -- target/tls.sh@191 -- # waitforlisten 1501076 /var/tmp/bdevperf.sock 00:22:29.307 21:23:23 -- common/autotest_common.sh@817 -- # '[' -z 1501076 ']' 00:22:29.307 21:23:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:29.307 21:23:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:29.307 21:23:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:29.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:29.307 21:23:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:29.307 21:23:23 -- common/autotest_common.sh@10 -- # set +x 00:22:29.307 [2024-04-23 21:23:23.386907] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 
00:22:29.307 [2024-04-23 21:23:23.387020] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1501076 ] 00:22:29.307 EAL: No free 2048 kB hugepages reported on node 1 00:22:29.307 [2024-04-23 21:23:23.499512] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:29.569 [2024-04-23 21:23:23.594886] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:29.830 21:23:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:29.830 21:23:24 -- common/autotest_common.sh@850 -- # return 0 00:22:29.830 21:23:24 -- target/tls.sh@192 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.SPeQoxy81J 00:22:30.090 [2024-04-23 21:23:24.198044] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:30.090 [2024-04-23 21:23:24.198158] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:30.090 TLSTESTn1 00:22:30.090 21:23:24 -- target/tls.sh@196 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py save_config 00:22:30.349 21:23:24 -- target/tls.sh@196 -- # tgtconf='{ 00:22:30.349 "subsystems": [ 00:22:30.349 { 00:22:30.349 "subsystem": "keyring", 00:22:30.349 "config": [] 00:22:30.349 }, 00:22:30.349 { 00:22:30.349 "subsystem": "iobuf", 00:22:30.349 "config": [ 00:22:30.349 { 00:22:30.350 "method": "iobuf_set_options", 00:22:30.350 "params": { 00:22:30.350 "small_pool_count": 8192, 00:22:30.350 "large_pool_count": 1024, 00:22:30.350 "small_bufsize": 8192, 00:22:30.350 "large_bufsize": 135168 00:22:30.350 } 00:22:30.350 } 00:22:30.350 ] 00:22:30.350 }, 00:22:30.350 { 00:22:30.350 "subsystem": "sock", 00:22:30.350 "config": [ 00:22:30.350 { 00:22:30.350 "method": "sock_impl_set_options", 00:22:30.350 "params": { 00:22:30.350 "impl_name": "posix", 00:22:30.350 "recv_buf_size": 2097152, 00:22:30.350 "send_buf_size": 2097152, 00:22:30.350 "enable_recv_pipe": true, 00:22:30.350 "enable_quickack": false, 00:22:30.350 "enable_placement_id": 0, 00:22:30.350 "enable_zerocopy_send_server": true, 00:22:30.350 "enable_zerocopy_send_client": false, 00:22:30.350 "zerocopy_threshold": 0, 00:22:30.350 "tls_version": 0, 00:22:30.350 "enable_ktls": false 00:22:30.350 } 00:22:30.350 }, 00:22:30.350 { 00:22:30.350 "method": "sock_impl_set_options", 00:22:30.350 "params": { 00:22:30.350 "impl_name": "ssl", 00:22:30.350 "recv_buf_size": 4096, 00:22:30.350 "send_buf_size": 4096, 00:22:30.350 "enable_recv_pipe": true, 00:22:30.350 "enable_quickack": false, 00:22:30.350 "enable_placement_id": 0, 00:22:30.350 "enable_zerocopy_send_server": true, 00:22:30.350 "enable_zerocopy_send_client": false, 00:22:30.350 "zerocopy_threshold": 0, 00:22:30.350 "tls_version": 0, 00:22:30.350 "enable_ktls": false 00:22:30.350 } 00:22:30.350 } 00:22:30.350 ] 00:22:30.350 }, 00:22:30.350 { 00:22:30.350 "subsystem": "vmd", 00:22:30.350 "config": [] 00:22:30.350 }, 00:22:30.350 { 00:22:30.350 "subsystem": "accel", 00:22:30.350 "config": [ 00:22:30.350 { 00:22:30.350 "method": "accel_set_options", 00:22:30.350 "params": { 00:22:30.350 "small_cache_size": 128, 00:22:30.350 "large_cache_size": 16, 00:22:30.350 
"task_count": 2048, 00:22:30.350 "sequence_count": 2048, 00:22:30.350 "buf_count": 2048 00:22:30.350 } 00:22:30.350 } 00:22:30.350 ] 00:22:30.350 }, 00:22:30.350 { 00:22:30.350 "subsystem": "bdev", 00:22:30.350 "config": [ 00:22:30.350 { 00:22:30.350 "method": "bdev_set_options", 00:22:30.350 "params": { 00:22:30.350 "bdev_io_pool_size": 65535, 00:22:30.350 "bdev_io_cache_size": 256, 00:22:30.350 "bdev_auto_examine": true, 00:22:30.350 "iobuf_small_cache_size": 128, 00:22:30.350 "iobuf_large_cache_size": 16 00:22:30.350 } 00:22:30.350 }, 00:22:30.350 { 00:22:30.350 "method": "bdev_raid_set_options", 00:22:30.350 "params": { 00:22:30.350 "process_window_size_kb": 1024 00:22:30.350 } 00:22:30.350 }, 00:22:30.350 { 00:22:30.350 "method": "bdev_iscsi_set_options", 00:22:30.350 "params": { 00:22:30.350 "timeout_sec": 30 00:22:30.350 } 00:22:30.350 }, 00:22:30.350 { 00:22:30.350 "method": "bdev_nvme_set_options", 00:22:30.350 "params": { 00:22:30.350 "action_on_timeout": "none", 00:22:30.350 "timeout_us": 0, 00:22:30.350 "timeout_admin_us": 0, 00:22:30.350 "keep_alive_timeout_ms": 10000, 00:22:30.350 "arbitration_burst": 0, 00:22:30.350 "low_priority_weight": 0, 00:22:30.350 "medium_priority_weight": 0, 00:22:30.350 "high_priority_weight": 0, 00:22:30.350 "nvme_adminq_poll_period_us": 10000, 00:22:30.350 "nvme_ioq_poll_period_us": 0, 00:22:30.350 "io_queue_requests": 0, 00:22:30.350 "delay_cmd_submit": true, 00:22:30.350 "transport_retry_count": 4, 00:22:30.350 "bdev_retry_count": 3, 00:22:30.350 "transport_ack_timeout": 0, 00:22:30.350 "ctrlr_loss_timeout_sec": 0, 00:22:30.350 "reconnect_delay_sec": 0, 00:22:30.350 "fast_io_fail_timeout_sec": 0, 00:22:30.350 "disable_auto_failback": false, 00:22:30.350 "generate_uuids": false, 00:22:30.350 "transport_tos": 0, 00:22:30.350 "nvme_error_stat": false, 00:22:30.350 "rdma_srq_size": 0, 00:22:30.350 "io_path_stat": false, 00:22:30.350 "allow_accel_sequence": false, 00:22:30.350 "rdma_max_cq_size": 0, 00:22:30.350 "rdma_cm_event_timeout_ms": 0, 00:22:30.350 "dhchap_digests": [ 00:22:30.350 "sha256", 00:22:30.350 "sha384", 00:22:30.350 "sha512" 00:22:30.350 ], 00:22:30.350 "dhchap_dhgroups": [ 00:22:30.350 "null", 00:22:30.350 "ffdhe2048", 00:22:30.350 "ffdhe3072", 00:22:30.350 "ffdhe4096", 00:22:30.350 "ffdhe6144", 00:22:30.350 "ffdhe8192" 00:22:30.350 ] 00:22:30.350 } 00:22:30.350 }, 00:22:30.350 { 00:22:30.350 "method": "bdev_nvme_set_hotplug", 00:22:30.350 "params": { 00:22:30.350 "period_us": 100000, 00:22:30.350 "enable": false 00:22:30.350 } 00:22:30.350 }, 00:22:30.350 { 00:22:30.350 "method": "bdev_malloc_create", 00:22:30.350 "params": { 00:22:30.350 "name": "malloc0", 00:22:30.350 "num_blocks": 8192, 00:22:30.350 "block_size": 4096, 00:22:30.350 "physical_block_size": 4096, 00:22:30.350 "uuid": "88fcf1a6-2f2d-44a1-8dbc-de3eac8d50fb", 00:22:30.350 "optimal_io_boundary": 0 00:22:30.350 } 00:22:30.350 }, 00:22:30.350 { 00:22:30.350 "method": "bdev_wait_for_examine" 00:22:30.350 } 00:22:30.350 ] 00:22:30.350 }, 00:22:30.350 { 00:22:30.350 "subsystem": "nbd", 00:22:30.350 "config": [] 00:22:30.350 }, 00:22:30.350 { 00:22:30.350 "subsystem": "scheduler", 00:22:30.350 "config": [ 00:22:30.350 { 00:22:30.350 "method": "framework_set_scheduler", 00:22:30.350 "params": { 00:22:30.350 "name": "static" 00:22:30.350 } 00:22:30.350 } 00:22:30.350 ] 00:22:30.350 }, 00:22:30.350 { 00:22:30.350 "subsystem": "nvmf", 00:22:30.350 "config": [ 00:22:30.350 { 00:22:30.350 "method": "nvmf_set_config", 00:22:30.350 "params": { 00:22:30.350 "discovery_filter": 
"match_any", 00:22:30.350 "admin_cmd_passthru": { 00:22:30.350 "identify_ctrlr": false 00:22:30.350 } 00:22:30.350 } 00:22:30.350 }, 00:22:30.350 { 00:22:30.350 "method": "nvmf_set_max_subsystems", 00:22:30.350 "params": { 00:22:30.350 "max_subsystems": 1024 00:22:30.350 } 00:22:30.350 }, 00:22:30.350 { 00:22:30.350 "method": "nvmf_set_crdt", 00:22:30.350 "params": { 00:22:30.350 "crdt1": 0, 00:22:30.350 "crdt2": 0, 00:22:30.350 "crdt3": 0 00:22:30.350 } 00:22:30.350 }, 00:22:30.350 { 00:22:30.350 "method": "nvmf_create_transport", 00:22:30.350 "params": { 00:22:30.350 "trtype": "TCP", 00:22:30.350 "max_queue_depth": 128, 00:22:30.350 "max_io_qpairs_per_ctrlr": 127, 00:22:30.350 "in_capsule_data_size": 4096, 00:22:30.350 "max_io_size": 131072, 00:22:30.350 "io_unit_size": 131072, 00:22:30.350 "max_aq_depth": 128, 00:22:30.350 "num_shared_buffers": 511, 00:22:30.350 "buf_cache_size": 4294967295, 00:22:30.350 "dif_insert_or_strip": false, 00:22:30.350 "zcopy": false, 00:22:30.350 "c2h_success": false, 00:22:30.350 "sock_priority": 0, 00:22:30.350 "abort_timeout_sec": 1, 00:22:30.350 "ack_timeout": 0, 00:22:30.350 "data_wr_pool_size": 0 00:22:30.350 } 00:22:30.350 }, 00:22:30.350 { 00:22:30.350 "method": "nvmf_create_subsystem", 00:22:30.350 "params": { 00:22:30.350 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:30.350 "allow_any_host": false, 00:22:30.350 "serial_number": "SPDK00000000000001", 00:22:30.350 "model_number": "SPDK bdev Controller", 00:22:30.350 "max_namespaces": 10, 00:22:30.350 "min_cntlid": 1, 00:22:30.350 "max_cntlid": 65519, 00:22:30.350 "ana_reporting": false 00:22:30.350 } 00:22:30.350 }, 00:22:30.350 { 00:22:30.350 "method": "nvmf_subsystem_add_host", 00:22:30.350 "params": { 00:22:30.350 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:30.350 "host": "nqn.2016-06.io.spdk:host1", 00:22:30.350 "psk": "/tmp/tmp.SPeQoxy81J" 00:22:30.350 } 00:22:30.350 }, 00:22:30.350 { 00:22:30.350 "method": "nvmf_subsystem_add_ns", 00:22:30.350 "params": { 00:22:30.351 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:30.351 "namespace": { 00:22:30.351 "nsid": 1, 00:22:30.351 "bdev_name": "malloc0", 00:22:30.351 "nguid": "88FCF1A62F2D44A18DBCDE3EAC8D50FB", 00:22:30.351 "uuid": "88fcf1a6-2f2d-44a1-8dbc-de3eac8d50fb", 00:22:30.351 "no_auto_visible": false 00:22:30.351 } 00:22:30.351 } 00:22:30.351 }, 00:22:30.351 { 00:22:30.351 "method": "nvmf_subsystem_add_listener", 00:22:30.351 "params": { 00:22:30.351 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:30.351 "listen_address": { 00:22:30.351 "trtype": "TCP", 00:22:30.351 "adrfam": "IPv4", 00:22:30.351 "traddr": "10.0.0.2", 00:22:30.351 "trsvcid": "4420" 00:22:30.351 }, 00:22:30.351 "secure_channel": true 00:22:30.351 } 00:22:30.351 } 00:22:30.351 ] 00:22:30.351 } 00:22:30.351 ] 00:22:30.351 }' 00:22:30.351 21:23:24 -- target/tls.sh@197 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:30.610 21:23:24 -- target/tls.sh@197 -- # bdevperfconf='{ 00:22:30.610 "subsystems": [ 00:22:30.610 { 00:22:30.610 "subsystem": "keyring", 00:22:30.610 "config": [] 00:22:30.610 }, 00:22:30.610 { 00:22:30.610 "subsystem": "iobuf", 00:22:30.610 "config": [ 00:22:30.610 { 00:22:30.610 "method": "iobuf_set_options", 00:22:30.610 "params": { 00:22:30.610 "small_pool_count": 8192, 00:22:30.610 "large_pool_count": 1024, 00:22:30.610 "small_bufsize": 8192, 00:22:30.610 "large_bufsize": 135168 00:22:30.610 } 00:22:30.610 } 00:22:30.610 ] 00:22:30.610 }, 00:22:30.610 { 00:22:30.610 "subsystem": "sock", 00:22:30.610 "config": [ 
00:22:30.610 { 00:22:30.610 "method": "sock_impl_set_options", 00:22:30.610 "params": { 00:22:30.610 "impl_name": "posix", 00:22:30.610 "recv_buf_size": 2097152, 00:22:30.610 "send_buf_size": 2097152, 00:22:30.610 "enable_recv_pipe": true, 00:22:30.610 "enable_quickack": false, 00:22:30.610 "enable_placement_id": 0, 00:22:30.610 "enable_zerocopy_send_server": true, 00:22:30.610 "enable_zerocopy_send_client": false, 00:22:30.610 "zerocopy_threshold": 0, 00:22:30.610 "tls_version": 0, 00:22:30.610 "enable_ktls": false 00:22:30.610 } 00:22:30.610 }, 00:22:30.610 { 00:22:30.610 "method": "sock_impl_set_options", 00:22:30.610 "params": { 00:22:30.610 "impl_name": "ssl", 00:22:30.610 "recv_buf_size": 4096, 00:22:30.610 "send_buf_size": 4096, 00:22:30.610 "enable_recv_pipe": true, 00:22:30.610 "enable_quickack": false, 00:22:30.610 "enable_placement_id": 0, 00:22:30.610 "enable_zerocopy_send_server": true, 00:22:30.610 "enable_zerocopy_send_client": false, 00:22:30.610 "zerocopy_threshold": 0, 00:22:30.610 "tls_version": 0, 00:22:30.610 "enable_ktls": false 00:22:30.610 } 00:22:30.610 } 00:22:30.611 ] 00:22:30.611 }, 00:22:30.611 { 00:22:30.611 "subsystem": "vmd", 00:22:30.611 "config": [] 00:22:30.611 }, 00:22:30.611 { 00:22:30.611 "subsystem": "accel", 00:22:30.611 "config": [ 00:22:30.611 { 00:22:30.611 "method": "accel_set_options", 00:22:30.611 "params": { 00:22:30.611 "small_cache_size": 128, 00:22:30.611 "large_cache_size": 16, 00:22:30.611 "task_count": 2048, 00:22:30.611 "sequence_count": 2048, 00:22:30.611 "buf_count": 2048 00:22:30.611 } 00:22:30.611 } 00:22:30.611 ] 00:22:30.611 }, 00:22:30.611 { 00:22:30.611 "subsystem": "bdev", 00:22:30.611 "config": [ 00:22:30.611 { 00:22:30.611 "method": "bdev_set_options", 00:22:30.611 "params": { 00:22:30.611 "bdev_io_pool_size": 65535, 00:22:30.611 "bdev_io_cache_size": 256, 00:22:30.611 "bdev_auto_examine": true, 00:22:30.611 "iobuf_small_cache_size": 128, 00:22:30.611 "iobuf_large_cache_size": 16 00:22:30.611 } 00:22:30.611 }, 00:22:30.611 { 00:22:30.611 "method": "bdev_raid_set_options", 00:22:30.611 "params": { 00:22:30.611 "process_window_size_kb": 1024 00:22:30.611 } 00:22:30.611 }, 00:22:30.611 { 00:22:30.611 "method": "bdev_iscsi_set_options", 00:22:30.611 "params": { 00:22:30.611 "timeout_sec": 30 00:22:30.611 } 00:22:30.611 }, 00:22:30.611 { 00:22:30.611 "method": "bdev_nvme_set_options", 00:22:30.611 "params": { 00:22:30.611 "action_on_timeout": "none", 00:22:30.611 "timeout_us": 0, 00:22:30.611 "timeout_admin_us": 0, 00:22:30.611 "keep_alive_timeout_ms": 10000, 00:22:30.611 "arbitration_burst": 0, 00:22:30.611 "low_priority_weight": 0, 00:22:30.611 "medium_priority_weight": 0, 00:22:30.611 "high_priority_weight": 0, 00:22:30.611 "nvme_adminq_poll_period_us": 10000, 00:22:30.611 "nvme_ioq_poll_period_us": 0, 00:22:30.611 "io_queue_requests": 512, 00:22:30.611 "delay_cmd_submit": true, 00:22:30.611 "transport_retry_count": 4, 00:22:30.611 "bdev_retry_count": 3, 00:22:30.611 "transport_ack_timeout": 0, 00:22:30.611 "ctrlr_loss_timeout_sec": 0, 00:22:30.611 "reconnect_delay_sec": 0, 00:22:30.611 "fast_io_fail_timeout_sec": 0, 00:22:30.611 "disable_auto_failback": false, 00:22:30.611 "generate_uuids": false, 00:22:30.611 "transport_tos": 0, 00:22:30.611 "nvme_error_stat": false, 00:22:30.611 "rdma_srq_size": 0, 00:22:30.611 "io_path_stat": false, 00:22:30.611 "allow_accel_sequence": false, 00:22:30.611 "rdma_max_cq_size": 0, 00:22:30.611 "rdma_cm_event_timeout_ms": 0, 00:22:30.611 "dhchap_digests": [ 00:22:30.611 "sha256", 00:22:30.611 
"sha384", 00:22:30.611 "sha512" 00:22:30.611 ], 00:22:30.611 "dhchap_dhgroups": [ 00:22:30.611 "null", 00:22:30.611 "ffdhe2048", 00:22:30.611 "ffdhe3072", 00:22:30.611 "ffdhe4096", 00:22:30.611 "ffdhe6144", 00:22:30.611 "ffdhe8192" 00:22:30.611 ] 00:22:30.611 } 00:22:30.611 }, 00:22:30.611 { 00:22:30.611 "method": "bdev_nvme_attach_controller", 00:22:30.611 "params": { 00:22:30.611 "name": "TLSTEST", 00:22:30.611 "trtype": "TCP", 00:22:30.611 "adrfam": "IPv4", 00:22:30.611 "traddr": "10.0.0.2", 00:22:30.611 "trsvcid": "4420", 00:22:30.611 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:30.611 "prchk_reftag": false, 00:22:30.611 "prchk_guard": false, 00:22:30.611 "ctrlr_loss_timeout_sec": 0, 00:22:30.611 "reconnect_delay_sec": 0, 00:22:30.611 "fast_io_fail_timeout_sec": 0, 00:22:30.611 "psk": "/tmp/tmp.SPeQoxy81J", 00:22:30.611 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:30.611 "hdgst": false, 00:22:30.611 "ddgst": false 00:22:30.611 } 00:22:30.611 }, 00:22:30.611 { 00:22:30.611 "method": "bdev_nvme_set_hotplug", 00:22:30.611 "params": { 00:22:30.611 "period_us": 100000, 00:22:30.611 "enable": false 00:22:30.611 } 00:22:30.611 }, 00:22:30.611 { 00:22:30.611 "method": "bdev_wait_for_examine" 00:22:30.611 } 00:22:30.611 ] 00:22:30.611 }, 00:22:30.611 { 00:22:30.611 "subsystem": "nbd", 00:22:30.611 "config": [] 00:22:30.611 } 00:22:30.611 ] 00:22:30.611 }' 00:22:30.611 21:23:24 -- target/tls.sh@199 -- # killprocess 1501076 00:22:30.611 21:23:24 -- common/autotest_common.sh@936 -- # '[' -z 1501076 ']' 00:22:30.611 21:23:24 -- common/autotest_common.sh@940 -- # kill -0 1501076 00:22:30.611 21:23:24 -- common/autotest_common.sh@941 -- # uname 00:22:30.611 21:23:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:30.611 21:23:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1501076 00:22:30.611 21:23:24 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:22:30.611 21:23:24 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:22:30.611 21:23:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1501076' 00:22:30.611 killing process with pid 1501076 00:22:30.611 21:23:24 -- common/autotest_common.sh@955 -- # kill 1501076 00:22:30.611 Received shutdown signal, test time was about 10.000000 seconds 00:22:30.611 00:22:30.611 Latency(us) 00:22:30.611 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:30.611 =================================================================================================================== 00:22:30.611 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:30.611 [2024-04-23 21:23:24.733023] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:30.611 21:23:24 -- common/autotest_common.sh@960 -- # wait 1501076 00:22:30.870 21:23:25 -- target/tls.sh@200 -- # killprocess 1500750 00:22:30.870 21:23:25 -- common/autotest_common.sh@936 -- # '[' -z 1500750 ']' 00:22:30.870 21:23:25 -- common/autotest_common.sh@940 -- # kill -0 1500750 00:22:30.870 21:23:25 -- common/autotest_common.sh@941 -- # uname 00:22:30.870 21:23:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:30.870 21:23:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1500750 00:22:31.130 21:23:25 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:31.130 21:23:25 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:31.130 21:23:25 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 1500750' 00:22:31.130 killing process with pid 1500750 00:22:31.130 21:23:25 -- common/autotest_common.sh@955 -- # kill 1500750 00:22:31.130 [2024-04-23 21:23:25.158031] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:31.130 21:23:25 -- common/autotest_common.sh@960 -- # wait 1500750 00:22:31.389 21:23:25 -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:22:31.389 21:23:25 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:31.389 21:23:25 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:31.389 21:23:25 -- common/autotest_common.sh@10 -- # set +x 00:22:31.389 21:23:25 -- target/tls.sh@203 -- # echo '{ 00:22:31.389 "subsystems": [ 00:22:31.389 { 00:22:31.389 "subsystem": "keyring", 00:22:31.389 "config": [] 00:22:31.389 }, 00:22:31.389 { 00:22:31.389 "subsystem": "iobuf", 00:22:31.389 "config": [ 00:22:31.389 { 00:22:31.389 "method": "iobuf_set_options", 00:22:31.389 "params": { 00:22:31.389 "small_pool_count": 8192, 00:22:31.389 "large_pool_count": 1024, 00:22:31.389 "small_bufsize": 8192, 00:22:31.389 "large_bufsize": 135168 00:22:31.389 } 00:22:31.389 } 00:22:31.389 ] 00:22:31.389 }, 00:22:31.389 { 00:22:31.389 "subsystem": "sock", 00:22:31.389 "config": [ 00:22:31.389 { 00:22:31.389 "method": "sock_impl_set_options", 00:22:31.389 "params": { 00:22:31.389 "impl_name": "posix", 00:22:31.389 "recv_buf_size": 2097152, 00:22:31.389 "send_buf_size": 2097152, 00:22:31.389 "enable_recv_pipe": true, 00:22:31.389 "enable_quickack": false, 00:22:31.389 "enable_placement_id": 0, 00:22:31.389 "enable_zerocopy_send_server": true, 00:22:31.389 "enable_zerocopy_send_client": false, 00:22:31.389 "zerocopy_threshold": 0, 00:22:31.389 "tls_version": 0, 00:22:31.389 "enable_ktls": false 00:22:31.389 } 00:22:31.389 }, 00:22:31.389 { 00:22:31.389 "method": "sock_impl_set_options", 00:22:31.389 "params": { 00:22:31.389 "impl_name": "ssl", 00:22:31.389 "recv_buf_size": 4096, 00:22:31.389 "send_buf_size": 4096, 00:22:31.389 "enable_recv_pipe": true, 00:22:31.389 "enable_quickack": false, 00:22:31.389 "enable_placement_id": 0, 00:22:31.389 "enable_zerocopy_send_server": true, 00:22:31.389 "enable_zerocopy_send_client": false, 00:22:31.389 "zerocopy_threshold": 0, 00:22:31.389 "tls_version": 0, 00:22:31.389 "enable_ktls": false 00:22:31.389 } 00:22:31.389 } 00:22:31.389 ] 00:22:31.389 }, 00:22:31.389 { 00:22:31.389 "subsystem": "vmd", 00:22:31.389 "config": [] 00:22:31.389 }, 00:22:31.389 { 00:22:31.389 "subsystem": "accel", 00:22:31.389 "config": [ 00:22:31.389 { 00:22:31.389 "method": "accel_set_options", 00:22:31.389 "params": { 00:22:31.389 "small_cache_size": 128, 00:22:31.389 "large_cache_size": 16, 00:22:31.390 "task_count": 2048, 00:22:31.390 "sequence_count": 2048, 00:22:31.390 "buf_count": 2048 00:22:31.390 } 00:22:31.390 } 00:22:31.390 ] 00:22:31.390 }, 00:22:31.390 { 00:22:31.390 "subsystem": "bdev", 00:22:31.390 "config": [ 00:22:31.390 { 00:22:31.390 "method": "bdev_set_options", 00:22:31.390 "params": { 00:22:31.390 "bdev_io_pool_size": 65535, 00:22:31.390 "bdev_io_cache_size": 256, 00:22:31.390 "bdev_auto_examine": true, 00:22:31.390 "iobuf_small_cache_size": 128, 00:22:31.390 "iobuf_large_cache_size": 16 00:22:31.390 } 00:22:31.390 }, 00:22:31.390 { 00:22:31.390 "method": "bdev_raid_set_options", 00:22:31.390 "params": { 00:22:31.390 "process_window_size_kb": 1024 00:22:31.390 } 00:22:31.390 }, 00:22:31.390 { 
00:22:31.390 "method": "bdev_iscsi_set_options", 00:22:31.390 "params": { 00:22:31.390 "timeout_sec": 30 00:22:31.390 } 00:22:31.390 }, 00:22:31.390 { 00:22:31.390 "method": "bdev_nvme_set_options", 00:22:31.390 "params": { 00:22:31.390 "action_on_timeout": "none", 00:22:31.390 "timeout_us": 0, 00:22:31.390 "timeout_admin_us": 0, 00:22:31.390 "keep_alive_timeout_ms": 10000, 00:22:31.390 "arbitration_burst": 0, 00:22:31.390 "low_priority_weight": 0, 00:22:31.390 "medium_priority_weight": 0, 00:22:31.390 "high_priority_weight": 0, 00:22:31.390 "nvme_adminq_poll_period_us": 10000, 00:22:31.390 "nvme_ioq_poll_period_us": 0, 00:22:31.390 "io_queue_requests": 0, 00:22:31.390 "delay_cmd_submit": true, 00:22:31.390 "transport_retry_count": 4, 00:22:31.390 "bdev_retry_count": 3, 00:22:31.390 "transport_ack_timeout": 0, 00:22:31.390 "ctrlr_loss_timeout_sec": 0, 00:22:31.390 "reconnect_delay_sec": 0, 00:22:31.390 "fast_io_fail_timeout_sec": 0, 00:22:31.390 "disable_auto_failback": false, 00:22:31.390 "generate_uuids": false, 00:22:31.390 "transport_tos": 0, 00:22:31.390 "nvme_error_stat": false, 00:22:31.390 "rdma_srq_size": 0, 00:22:31.390 "io_path_stat": false, 00:22:31.390 "allow_accel_sequence": false, 00:22:31.390 "rdma_max_cq_size": 0, 00:22:31.390 "rdma_cm_event_timeout_ms": 0, 00:22:31.390 "dhchap_digests": [ 00:22:31.390 "sha256", 00:22:31.390 "sha384", 00:22:31.390 "sha512" 00:22:31.390 ], 00:22:31.390 "dhchap_dhgroups": [ 00:22:31.390 "null", 00:22:31.390 "ffdhe2048", 00:22:31.390 "ffdhe3072", 00:22:31.390 "ffdhe4096", 00:22:31.390 "ffdhe6144", 00:22:31.390 "ffdhe8192" 00:22:31.390 ] 00:22:31.390 } 00:22:31.390 }, 00:22:31.390 { 00:22:31.390 "method": "bdev_nvme_set_hotplug", 00:22:31.390 "params": { 00:22:31.390 "period_us": 100000, 00:22:31.390 "enable": false 00:22:31.390 } 00:22:31.390 }, 00:22:31.390 { 00:22:31.390 "method": "bdev_malloc_create", 00:22:31.390 "params": { 00:22:31.390 "name": "malloc0", 00:22:31.390 "num_blocks": 8192, 00:22:31.390 "block_size": 4096, 00:22:31.390 "physical_block_size": 4096, 00:22:31.390 "uuid": "88fcf1a6-2f2d-44a1-8dbc-de3eac8d50fb", 00:22:31.390 "optimal_io_boundary": 0 00:22:31.390 } 00:22:31.390 }, 00:22:31.390 { 00:22:31.390 "method": "bdev_wait_for_examine" 00:22:31.390 } 00:22:31.390 ] 00:22:31.390 }, 00:22:31.390 { 00:22:31.390 "subsystem": "nbd", 00:22:31.390 "config": [] 00:22:31.390 }, 00:22:31.390 { 00:22:31.390 "subsystem": "scheduler", 00:22:31.390 "config": [ 00:22:31.390 { 00:22:31.390 "method": "framework_set_scheduler", 00:22:31.390 "params": { 00:22:31.390 "name": "static" 00:22:31.390 } 00:22:31.390 } 00:22:31.390 ] 00:22:31.390 }, 00:22:31.390 { 00:22:31.390 "subsystem": "nvmf", 00:22:31.390 "config": [ 00:22:31.390 { 00:22:31.390 "method": "nvmf_set_config", 00:22:31.390 "params": { 00:22:31.390 "discovery_filter": "match_any", 00:22:31.390 "admin_cmd_passthru": { 00:22:31.390 "identify_ctrlr": false 00:22:31.390 } 00:22:31.390 } 00:22:31.390 }, 00:22:31.390 { 00:22:31.390 "method": "nvmf_set_max_subsystems", 00:22:31.390 "params": { 00:22:31.390 "max_subsystems": 1024 00:22:31.390 } 00:22:31.390 }, 00:22:31.390 { 00:22:31.390 "method": "nvmf_set_crdt", 00:22:31.390 "params": { 00:22:31.390 "crdt1": 0, 00:22:31.390 "crdt2": 0, 00:22:31.390 "crdt3": 0 00:22:31.390 } 00:22:31.390 }, 00:22:31.390 { 00:22:31.390 "method": "nvmf_create_transport", 00:22:31.390 "params": { 00:22:31.390 "trtype": "TCP", 00:22:31.390 "max_queue_depth": 128, 00:22:31.390 "max_io_qpairs_per_ctrlr": 127, 00:22:31.390 "in_capsule_data_size": 4096, 
00:22:31.390 "max_io_size": 131072, 00:22:31.390 "io_unit_size": 131072, 00:22:31.390 "max_aq_depth": 128, 00:22:31.390 "num_shared_buffers": 511, 00:22:31.390 "buf_cache_size": 4294967295, 00:22:31.390 "dif_insert_or_strip": false, 00:22:31.390 "zcopy": false, 00:22:31.390 "c2h_success": false, 00:22:31.390 "sock_priority": 0, 00:22:31.390 "abort_timeout_sec": 1, 00:22:31.390 "ack_timeout": 0, 00:22:31.390 "data_wr_pool_size": 0 00:22:31.390 } 00:22:31.390 }, 00:22:31.390 { 00:22:31.390 "method": "nvmf_create_subsystem", 00:22:31.390 "params": { 00:22:31.390 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:31.390 "allow_any_host": false, 00:22:31.390 "serial_number": "SPDK00000000000001", 00:22:31.390 "model_number": "SPDK bdev Controller", 00:22:31.390 "max_namespaces": 10, 00:22:31.390 "min_cntlid": 1, 00:22:31.390 "max_cntlid": 65519, 00:22:31.390 "ana_reporting": false 00:22:31.390 } 00:22:31.390 }, 00:22:31.390 { 00:22:31.390 "method": "nvmf_subsystem_add_host", 00:22:31.390 "params": { 00:22:31.390 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:31.390 "host": "nqn.2016-06.io.spdk:host1", 00:22:31.390 "psk": "/tmp/tmp.SPeQoxy81J" 00:22:31.390 } 00:22:31.390 }, 00:22:31.390 { 00:22:31.390 "method": "nvmf_subsystem_add_ns", 00:22:31.390 "params": { 00:22:31.390 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:31.390 "namespace": { 00:22:31.390 "nsid": 1, 00:22:31.390 "bdev_name": "malloc0", 00:22:31.390 "nguid": "88FCF1A62F2D44A18DBCDE3EAC8D50FB", 00:22:31.390 "uuid": "88fcf1a6-2f2d-44a1-8dbc-de3eac8d50fb", 00:22:31.390 "no_auto_visible": false 00:22:31.390 } 00:22:31.390 } 00:22:31.390 }, 00:22:31.390 { 00:22:31.390 "method": "nvmf_subsystem_add_listener", 00:22:31.390 "params": { 00:22:31.390 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:31.390 "listen_address": { 00:22:31.390 "trtype": "TCP", 00:22:31.390 "adrfam": "IPv4", 00:22:31.390 "traddr": "10.0.0.2", 00:22:31.390 "trsvcid": "4420" 00:22:31.390 }, 00:22:31.390 "secure_channel": true 00:22:31.390 } 00:22:31.390 } 00:22:31.390 ] 00:22:31.390 } 00:22:31.390 ] 00:22:31.390 }' 00:22:31.390 21:23:25 -- nvmf/common.sh@470 -- # nvmfpid=1501423 00:22:31.390 21:23:25 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:22:31.390 21:23:25 -- nvmf/common.sh@471 -- # waitforlisten 1501423 00:22:31.390 21:23:25 -- common/autotest_common.sh@817 -- # '[' -z 1501423 ']' 00:22:31.390 21:23:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:31.390 21:23:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:31.390 21:23:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:31.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:31.390 21:23:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:31.390 21:23:25 -- common/autotest_common.sh@10 -- # set +x 00:22:31.651 [2024-04-23 21:23:25.702024] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 
00:22:31.651 [2024-04-23 21:23:25.702109] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:31.651 EAL: No free 2048 kB hugepages reported on node 1 00:22:31.651 [2024-04-23 21:23:25.796397] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:31.651 [2024-04-23 21:23:25.894409] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:31.651 [2024-04-23 21:23:25.894447] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:31.651 [2024-04-23 21:23:25.894457] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:31.651 [2024-04-23 21:23:25.894467] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:31.651 [2024-04-23 21:23:25.894475] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:31.651 [2024-04-23 21:23:25.894563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:32.222 [2024-04-23 21:23:26.199840] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:32.222 [2024-04-23 21:23:26.215785] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:32.223 [2024-04-23 21:23:26.231798] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:32.223 [2024-04-23 21:23:26.232045] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:32.223 21:23:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:32.223 21:23:26 -- common/autotest_common.sh@850 -- # return 0 00:22:32.223 21:23:26 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:32.223 21:23:26 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:32.223 21:23:26 -- common/autotest_common.sh@10 -- # set +x 00:22:32.223 21:23:26 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:32.223 21:23:26 -- target/tls.sh@207 -- # bdevperf_pid=1501706 00:22:32.223 21:23:26 -- target/tls.sh@208 -- # waitforlisten 1501706 /var/tmp/bdevperf.sock 00:22:32.223 21:23:26 -- common/autotest_common.sh@817 -- # '[' -z 1501706 ']' 00:22:32.223 21:23:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:32.223 21:23:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:32.223 21:23:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:32.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
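The target above was started with -c /dev/fd/62 rather than a config file on disk: the harness inlines the JSON it just echoed through an inherited file descriptor. A minimal sketch of that mechanism, assuming bash process substitution (the $tgtconf variable is illustrative, not from the harness):

tgtconf='{ "subsystems": [] }'                     # stand-in for the JSON echoed above
build/bin/nvmf_tgt -m 0x2 -c <(echo "$tgtconf")    # <(...) expands to a /dev/fd/NN path

This keeps the PSK-bearing configuration off the command line and out of a temp file for the lifetime of the run.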
00:22:32.223 21:23:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:32.223 21:23:26 -- common/autotest_common.sh@10 -- # set +x 00:22:32.223 21:23:26 -- target/tls.sh@204 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:22:32.223 21:23:26 -- target/tls.sh@204 -- # echo '{ 00:22:32.223 "subsystems": [ 00:22:32.223 { 00:22:32.223 "subsystem": "keyring", 00:22:32.223 "config": [] 00:22:32.223 }, 00:22:32.223 { 00:22:32.223 "subsystem": "iobuf", 00:22:32.223 "config": [ 00:22:32.223 { 00:22:32.223 "method": "iobuf_set_options", 00:22:32.223 "params": { 00:22:32.223 "small_pool_count": 8192, 00:22:32.223 "large_pool_count": 1024, 00:22:32.223 "small_bufsize": 8192, 00:22:32.223 "large_bufsize": 135168 00:22:32.223 } 00:22:32.223 } 00:22:32.223 ] 00:22:32.223 }, 00:22:32.223 { 00:22:32.223 "subsystem": "sock", 00:22:32.223 "config": [ 00:22:32.223 { 00:22:32.223 "method": "sock_impl_set_options", 00:22:32.223 "params": { 00:22:32.223 "impl_name": "posix", 00:22:32.223 "recv_buf_size": 2097152, 00:22:32.223 "send_buf_size": 2097152, 00:22:32.223 "enable_recv_pipe": true, 00:22:32.223 "enable_quickack": false, 00:22:32.223 "enable_placement_id": 0, 00:22:32.223 "enable_zerocopy_send_server": true, 00:22:32.223 "enable_zerocopy_send_client": false, 00:22:32.223 "zerocopy_threshold": 0, 00:22:32.223 "tls_version": 0, 00:22:32.223 "enable_ktls": false 00:22:32.223 } 00:22:32.223 }, 00:22:32.223 { 00:22:32.223 "method": "sock_impl_set_options", 00:22:32.223 "params": { 00:22:32.223 "impl_name": "ssl", 00:22:32.223 "recv_buf_size": 4096, 00:22:32.223 "send_buf_size": 4096, 00:22:32.223 "enable_recv_pipe": true, 00:22:32.223 "enable_quickack": false, 00:22:32.223 "enable_placement_id": 0, 00:22:32.223 "enable_zerocopy_send_server": true, 00:22:32.223 "enable_zerocopy_send_client": false, 00:22:32.223 "zerocopy_threshold": 0, 00:22:32.223 "tls_version": 0, 00:22:32.223 "enable_ktls": false 00:22:32.223 } 00:22:32.223 } 00:22:32.223 ] 00:22:32.223 }, 00:22:32.223 { 00:22:32.223 "subsystem": "vmd", 00:22:32.223 "config": [] 00:22:32.223 }, 00:22:32.223 { 00:22:32.223 "subsystem": "accel", 00:22:32.223 "config": [ 00:22:32.223 { 00:22:32.223 "method": "accel_set_options", 00:22:32.223 "params": { 00:22:32.223 "small_cache_size": 128, 00:22:32.223 "large_cache_size": 16, 00:22:32.223 "task_count": 2048, 00:22:32.223 "sequence_count": 2048, 00:22:32.223 "buf_count": 2048 00:22:32.223 } 00:22:32.223 } 00:22:32.223 ] 00:22:32.223 }, 00:22:32.223 { 00:22:32.223 "subsystem": "bdev", 00:22:32.223 "config": [ 00:22:32.223 { 00:22:32.223 "method": "bdev_set_options", 00:22:32.223 "params": { 00:22:32.223 "bdev_io_pool_size": 65535, 00:22:32.223 "bdev_io_cache_size": 256, 00:22:32.223 "bdev_auto_examine": true, 00:22:32.223 "iobuf_small_cache_size": 128, 00:22:32.223 "iobuf_large_cache_size": 16 00:22:32.223 } 00:22:32.223 }, 00:22:32.223 { 00:22:32.223 "method": "bdev_raid_set_options", 00:22:32.223 "params": { 00:22:32.223 "process_window_size_kb": 1024 00:22:32.223 } 00:22:32.223 }, 00:22:32.223 { 00:22:32.223 "method": "bdev_iscsi_set_options", 00:22:32.223 "params": { 00:22:32.223 "timeout_sec": 30 00:22:32.223 } 00:22:32.223 }, 00:22:32.223 { 00:22:32.223 "method": "bdev_nvme_set_options", 00:22:32.223 "params": { 00:22:32.223 "action_on_timeout": "none", 00:22:32.223 "timeout_us": 0, 00:22:32.223 "timeout_admin_us": 0, 00:22:32.223 "keep_alive_timeout_ms": 10000, 00:22:32.223 
"arbitration_burst": 0, 00:22:32.223 "low_priority_weight": 0, 00:22:32.223 "medium_priority_weight": 0, 00:22:32.223 "high_priority_weight": 0, 00:22:32.223 "nvme_adminq_poll_period_us": 10000, 00:22:32.223 "nvme_ioq_poll_period_us": 0, 00:22:32.223 "io_queue_requests": 512, 00:22:32.223 "delay_cmd_submit": true, 00:22:32.223 "transport_retry_count": 4, 00:22:32.223 "bdev_retry_count": 3, 00:22:32.223 "transport_ack_timeout": 0, 00:22:32.223 "ctrlr_loss_timeout_sec": 0, 00:22:32.223 "reconnect_delay_sec": 0, 00:22:32.223 "fast_io_fail_timeout_sec": 0, 00:22:32.223 "disable_auto_failback": false, 00:22:32.223 "generate_uuids": false, 00:22:32.223 "transport_tos": 0, 00:22:32.223 "nvme_error_stat": false, 00:22:32.223 "rdma_srq_size": 0, 00:22:32.223 "io_path_stat": false, 00:22:32.223 "allow_accel_sequence": false, 00:22:32.223 "rdma_max_cq_size": 0, 00:22:32.223 "rdma_cm_event_timeout_ms": 0, 00:22:32.223 "dhchap_digests": [ 00:22:32.223 "sha256", 00:22:32.223 "sha384", 00:22:32.223 "sha512" 00:22:32.223 ], 00:22:32.223 "dhchap_dhgroups": [ 00:22:32.223 "null", 00:22:32.223 "ffdhe2048", 00:22:32.223 "ffdhe3072", 00:22:32.223 "ffdhe4096", 00:22:32.223 "ffdhe6144", 00:22:32.223 "ffdhe8192" 00:22:32.223 ] 00:22:32.223 } 00:22:32.223 }, 00:22:32.223 { 00:22:32.223 "method": "bdev_nvme_attach_controller", 00:22:32.223 "params": { 00:22:32.223 "name": "TLSTEST", 00:22:32.223 "trtype": "TCP", 00:22:32.223 "adrfam": "IPv4", 00:22:32.223 "traddr": "10.0.0.2", 00:22:32.223 "trsvcid": "4420", 00:22:32.223 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:32.223 "prchk_reftag": false, 00:22:32.223 "prchk_guard": false, 00:22:32.223 "ctrlr_loss_timeout_sec": 0, 00:22:32.223 "reconnect_delay_sec": 0, 00:22:32.224 "fast_io_fail_timeout_sec": 0, 00:22:32.224 "psk": "/tmp/tmp.SPeQoxy81J", 00:22:32.224 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:32.224 "hdgst": false, 00:22:32.224 "ddgst": false 00:22:32.224 } 00:22:32.224 }, 00:22:32.224 { 00:22:32.224 "method": "bdev_nvme_set_hotplug", 00:22:32.224 "params": { 00:22:32.224 "period_us": 100000, 00:22:32.224 "enable": false 00:22:32.224 } 00:22:32.224 }, 00:22:32.224 { 00:22:32.224 "method": "bdev_wait_for_examine" 00:22:32.224 } 00:22:32.224 ] 00:22:32.224 }, 00:22:32.224 { 00:22:32.224 "subsystem": "nbd", 00:22:32.224 "config": [] 00:22:32.224 } 00:22:32.224 ] 00:22:32.224 }' 00:22:32.483 [2024-04-23 21:23:26.504391] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 
00:22:32.483 [2024-04-23 21:23:26.504506] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1501706 ] 00:22:32.483 EAL: No free 2048 kB hugepages reported on node 1 00:22:32.483 [2024-04-23 21:23:26.616737] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:32.483 [2024-04-23 21:23:26.710460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:32.742 [2024-04-23 21:23:26.920708] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:32.742 [2024-04-23 21:23:26.920805] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:33.002 21:23:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:33.002 21:23:27 -- common/autotest_common.sh@850 -- # return 0 00:22:33.002 21:23:27 -- target/tls.sh@211 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:33.262 Running I/O for 10 seconds... 00:22:43.262 00:22:43.262 Latency(us) 00:22:43.262 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:43.262 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:43.262 Verification LBA range: start 0x0 length 0x2000 00:22:43.262 TLSTESTn1 : 10.03 3467.71 13.55 0.00 0.00 36840.24 5242.88 105409.48 00:22:43.262 =================================================================================================================== 00:22:43.262 Total : 3467.71 13.55 0.00 0.00 36840.24 5242.88 105409.48 00:22:43.262 0 00:22:43.262 21:23:37 -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:43.262 21:23:37 -- target/tls.sh@214 -- # killprocess 1501706 00:22:43.262 21:23:37 -- common/autotest_common.sh@936 -- # '[' -z 1501706 ']' 00:22:43.262 21:23:37 -- common/autotest_common.sh@940 -- # kill -0 1501706 00:22:43.262 21:23:37 -- common/autotest_common.sh@941 -- # uname 00:22:43.262 21:23:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:43.262 21:23:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1501706 00:22:43.262 21:23:37 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:22:43.262 21:23:37 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:22:43.262 21:23:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1501706' 00:22:43.262 killing process with pid 1501706 00:22:43.262 21:23:37 -- common/autotest_common.sh@955 -- # kill 1501706 00:22:43.262 Received shutdown signal, test time was about 10.000000 seconds 00:22:43.262 00:22:43.262 Latency(us) 00:22:43.262 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:43.262 =================================================================================================================== 00:22:43.262 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:43.262 [2024-04-23 21:23:37.381327] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:43.262 21:23:37 -- common/autotest_common.sh@960 -- # wait 1501706 00:22:43.523 21:23:37 -- target/tls.sh@215 -- # killprocess 1501423 00:22:43.523 21:23:37 -- common/autotest_common.sh@936 -- # '[' -z 1501423 ']' 
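With -z, bdevperf loads its config, attaches TLSTEST over the TLS listener, and then parks until an RPC starts I/O; the 10-second run above was triggered by bdevperf.py over the bdevperf socket. Condensed from the commands traced above, with the Jenkins workspace prefix trimmed ($bperfconf stands in for the JSON echoed earlier):

build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4096 -w verify -t 10 -c <(echo "$bperfconf") &
examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests

The -t 20 on bdevperf.py is the script's wait timeout, distinct from the 10-second run length. The result table reads left to right as runtime, IOPS, MiB/s, fail and timeout rates, then average/min/max latency in microseconds: TLSTESTn1 sustained about 3468 IOPS (13.55 MiB/s) with ~36.8 ms average latency through the TLS-wrapped TCP connection.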
00:22:43.523 21:23:37 -- common/autotest_common.sh@940 -- # kill -0 1501423 00:22:43.523 21:23:37 -- common/autotest_common.sh@941 -- # uname 00:22:43.523 21:23:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:43.523 21:23:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1501423 00:22:43.782 21:23:37 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:43.782 21:23:37 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:43.782 21:23:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1501423' 00:22:43.782 killing process with pid 1501423 00:22:43.782 21:23:37 -- common/autotest_common.sh@955 -- # kill 1501423 00:22:43.782 [2024-04-23 21:23:37.799135] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:43.782 21:23:37 -- common/autotest_common.sh@960 -- # wait 1501423 00:22:44.349 21:23:38 -- target/tls.sh@218 -- # nvmfappstart 00:22:44.349 21:23:38 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:44.349 21:23:38 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:44.349 21:23:38 -- common/autotest_common.sh@10 -- # set +x 00:22:44.349 21:23:38 -- nvmf/common.sh@470 -- # nvmfpid=1503938 00:22:44.349 21:23:38 -- nvmf/common.sh@471 -- # waitforlisten 1503938 00:22:44.349 21:23:38 -- common/autotest_common.sh@817 -- # '[' -z 1503938 ']' 00:22:44.349 21:23:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:44.349 21:23:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:44.349 21:23:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:44.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:44.349 21:23:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:44.349 21:23:38 -- common/autotest_common.sh@10 -- # set +x 00:22:44.349 21:23:38 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:44.349 [2024-04-23 21:23:38.404803] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 00:22:44.349 [2024-04-23 21:23:38.404911] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:44.349 EAL: No free 2048 kB hugepages reported on node 1 00:22:44.349 [2024-04-23 21:23:38.529428] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:44.349 [2024-04-23 21:23:38.619803] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:44.349 [2024-04-23 21:23:38.619846] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:44.349 [2024-04-23 21:23:38.619855] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:44.349 [2024-04-23 21:23:38.619864] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:44.349 [2024-04-23 21:23:38.619871] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
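Every teardown in this run goes through the same killprocess helper traced above. A hedged reconstruction from the xtrace output alone (not the actual autotest_common.sh source; details may differ):

killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1                      # the '[' -z ... ']' guard above
    kill -0 "$pid" 2>/dev/null || return 1         # is the process still alive?
    if [ "$(uname)" = Linux ]; then
        # the reactor_N comparison above guards against killing sudo itself
        [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                    # reap and surface the exit code
}

The wait is what lets the deprecation summaries (the 'PSK path' and 'spdk_nvme_ctrlr_opts.psk' warnings above) flush before the next test stage starts.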
00:22:44.349 [2024-04-23 21:23:38.619895] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:44.917 21:23:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:44.917 21:23:39 -- common/autotest_common.sh@850 -- # return 0 00:22:44.917 21:23:39 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:44.917 21:23:39 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:44.917 21:23:39 -- common/autotest_common.sh@10 -- # set +x 00:22:44.917 21:23:39 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:44.917 21:23:39 -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.SPeQoxy81J 00:22:44.917 21:23:39 -- target/tls.sh@49 -- # local key=/tmp/tmp.SPeQoxy81J 00:22:44.917 21:23:39 -- target/tls.sh@51 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:45.178 [2024-04-23 21:23:39.271031] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:45.178 21:23:39 -- target/tls.sh@52 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:45.178 21:23:39 -- target/tls.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:45.439 [2024-04-23 21:23:39.571083] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:45.439 [2024-04-23 21:23:39.571361] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:45.439 21:23:39 -- target/tls.sh@55 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:45.698 malloc0 00:22:45.698 21:23:39 -- target/tls.sh@56 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:45.698 21:23:39 -- target/tls.sh@58 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.SPeQoxy81J 00:22:45.959 [2024-04-23 21:23:40.072304] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:45.959 21:23:40 -- target/tls.sh@222 -- # bdevperf_pid=1504384 00:22:45.959 21:23:40 -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:45.959 21:23:40 -- target/tls.sh@225 -- # waitforlisten 1504384 /var/tmp/bdevperf.sock 00:22:45.959 21:23:40 -- common/autotest_common.sh@817 -- # '[' -z 1504384 ']' 00:22:45.959 21:23:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:45.959 21:23:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:45.959 21:23:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:45.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:45.959 21:23:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:45.959 21:23:40 -- common/autotest_common.sh@10 -- # set +x 00:22:45.959 21:23:40 -- target/tls.sh@220 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:45.959 [2024-04-23 21:23:40.168738] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 
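setup_nvmf_tgt above provisions the entire TLS target in six RPCs. The same sequence with the Jenkins workspace prefix trimmed (the PSK file /tmp/tmp.SPeQoxy81J was generated earlier in the run):

scripts/rpc.py nvmf_create_transport -t tcp -o
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: secure channel (TLS)
scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.SPeQoxy81J

The --psk file-path form on nvmf_subsystem_add_host is what trips the 'PSK path' deprecation warning logged above; per that warning it is scheduled for removal in v24.09 in favor of keyring keys.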
00:22:45.960 [2024-04-23 21:23:40.168874] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1504384 ] 00:22:46.245 EAL: No free 2048 kB hugepages reported on node 1 00:22:46.245 [2024-04-23 21:23:40.301930] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:46.245 [2024-04-23 21:23:40.397715] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:46.817 21:23:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:46.817 21:23:40 -- common/autotest_common.sh@850 -- # return 0 00:22:46.817 21:23:40 -- target/tls.sh@227 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.SPeQoxy81J 00:22:46.817 21:23:41 -- target/tls.sh@228 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:47.076 [2024-04-23 21:23:41.167797] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:47.076 nvme0n1 00:22:47.076 21:23:41 -- target/tls.sh@232 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:47.076 Running I/O for 1 seconds... 00:22:48.461 00:22:48.461 Latency(us) 00:22:48.461 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:48.461 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:48.461 Verification LBA range: start 0x0 length 0x2000 00:22:48.461 nvme0n1 : 1.06 1631.30 6.37 0.00 0.00 76618.02 5277.37 100442.54 00:22:48.461 =================================================================================================================== 00:22:48.461 Total : 1631.30 6.37 0.00 0.00 76618.02 5277.37 100442.54 00:22:48.461 0 00:22:48.461 21:23:42 -- target/tls.sh@234 -- # killprocess 1504384 00:22:48.461 21:23:42 -- common/autotest_common.sh@936 -- # '[' -z 1504384 ']' 00:22:48.461 21:23:42 -- common/autotest_common.sh@940 -- # kill -0 1504384 00:22:48.461 21:23:42 -- common/autotest_common.sh@941 -- # uname 00:22:48.461 21:23:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:48.461 21:23:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1504384 00:22:48.461 21:23:42 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:48.462 21:23:42 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:48.462 21:23:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1504384' 00:22:48.462 killing process with pid 1504384 00:22:48.462 21:23:42 -- common/autotest_common.sh@955 -- # kill 1504384 00:22:48.462 Received shutdown signal, test time was about 1.000000 seconds 00:22:48.462 00:22:48.462 Latency(us) 00:22:48.462 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:48.462 =================================================================================================================== 00:22:48.462 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:48.462 21:23:42 -- common/autotest_common.sh@960 -- # wait 1504384 00:22:48.720 21:23:42 -- target/tls.sh@235 -- # killprocess 1503938 00:22:48.720 21:23:42 -- common/autotest_common.sh@936 -- # '[' -z 1503938 ']' 00:22:48.720 
21:23:42 -- common/autotest_common.sh@940 -- # kill -0 1503938 00:22:48.720 21:23:42 -- common/autotest_common.sh@941 -- # uname 00:22:48.720 21:23:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:48.720 21:23:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1503938 00:22:48.720 21:23:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:48.720 21:23:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:48.720 21:23:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1503938' 00:22:48.720 killing process with pid 1503938 00:22:48.720 21:23:42 -- common/autotest_common.sh@955 -- # kill 1503938 00:22:48.720 [2024-04-23 21:23:42.878138] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:48.720 21:23:42 -- common/autotest_common.sh@960 -- # wait 1503938 00:22:49.290 21:23:43 -- target/tls.sh@238 -- # nvmfappstart 00:22:49.290 21:23:43 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:49.290 21:23:43 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:49.290 21:23:43 -- common/autotest_common.sh@10 -- # set +x 00:22:49.290 21:23:43 -- nvmf/common.sh@470 -- # nvmfpid=1504999 00:22:49.290 21:23:43 -- nvmf/common.sh@471 -- # waitforlisten 1504999 00:22:49.290 21:23:43 -- common/autotest_common.sh@817 -- # '[' -z 1504999 ']' 00:22:49.290 21:23:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:49.290 21:23:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:49.290 21:23:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:49.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:49.290 21:23:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:49.291 21:23:43 -- common/autotest_common.sh@10 -- # set +x 00:22:49.291 21:23:43 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:49.291 [2024-04-23 21:23:43.464806] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 00:22:49.291 [2024-04-23 21:23:43.464927] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:49.291 EAL: No free 2048 kB hugepages reported on node 1 00:22:49.572 [2024-04-23 21:23:43.589535] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:49.572 [2024-04-23 21:23:43.680696] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:49.572 [2024-04-23 21:23:43.680732] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:49.572 [2024-04-23 21:23:43.680741] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:49.572 [2024-04-23 21:23:43.680751] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:49.572 [2024-04-23 21:23:43.680758] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
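The run that just finished replaces the deprecated in-config PSK path on the initiator side with the keyring flow: the key is registered first, then referenced by name when attaching. Exactly as traced above, against the bdevperf RPC socket:

scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.SPeQoxy81J
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1

Note the target side of that test still registered the host with a --psk file path, so the 'PSK path' warning at shutdown above is expected; only the initiator half had moved to keyring keys at this point.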
00:22:49.572 [2024-04-23 21:23:43.680782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:49.930 21:23:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:49.930 21:23:44 -- common/autotest_common.sh@850 -- # return 0 00:22:49.930 21:23:44 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:49.930 21:23:44 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:49.930 21:23:44 -- common/autotest_common.sh@10 -- # set +x 00:22:49.930 21:23:44 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:49.930 21:23:44 -- target/tls.sh@239 -- # rpc_cmd 00:22:49.930 21:23:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:49.930 21:23:44 -- common/autotest_common.sh@10 -- # set +x 00:22:49.930 [2024-04-23 21:23:44.185284] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:50.189 malloc0 00:22:50.189 [2024-04-23 21:23:44.234200] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:50.189 [2024-04-23 21:23:44.234424] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:50.189 21:23:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:50.189 21:23:44 -- target/tls.sh@252 -- # bdevperf_pid=1505045 00:22:50.189 21:23:44 -- target/tls.sh@254 -- # waitforlisten 1505045 /var/tmp/bdevperf.sock 00:22:50.189 21:23:44 -- common/autotest_common.sh@817 -- # '[' -z 1505045 ']' 00:22:50.189 21:23:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:50.189 21:23:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:50.189 21:23:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:50.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:50.189 21:23:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:50.189 21:23:44 -- common/autotest_common.sh@10 -- # set +x 00:22:50.189 21:23:44 -- target/tls.sh@250 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:50.189 [2024-04-23 21:23:44.331973] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 
00:22:50.189 [2024-04-23 21:23:44.332079] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1505045 ] 00:22:50.189 EAL: No free 2048 kB hugepages reported on node 1 00:22:50.189 [2024-04-23 21:23:44.445192] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:50.447 [2024-04-23 21:23:44.539513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:51.016 21:23:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:51.016 21:23:45 -- common/autotest_common.sh@850 -- # return 0 00:22:51.016 21:23:45 -- target/tls.sh@255 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.SPeQoxy81J 00:22:51.016 21:23:45 -- target/tls.sh@256 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:51.277 [2024-04-23 21:23:45.313609] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:51.277 nvme0n1 00:22:51.277 21:23:45 -- target/tls.sh@260 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:51.277 Running I/O for 1 seconds... 00:22:52.658 00:22:52.658 Latency(us) 00:22:52.658 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:52.658 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:52.658 Verification LBA range: start 0x0 length 0x2000 00:22:52.658 nvme0n1 : 1.03 2753.15 10.75 0.00 0.00 45824.17 6657.08 107065.13 00:22:52.658 =================================================================================================================== 00:22:52.658 Total : 2753.15 10.75 0.00 0.00 45824.17 6657.08 107065.13 00:22:52.658 0 00:22:52.658 21:23:46 -- target/tls.sh@263 -- # rpc_cmd save_config 00:22:52.658 21:23:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:52.658 21:23:46 -- common/autotest_common.sh@10 -- # set +x 00:22:52.658 21:23:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:52.658 21:23:46 -- target/tls.sh@263 -- # tgtcfg='{ 00:22:52.658 "subsystems": [ 00:22:52.658 { 00:22:52.658 "subsystem": "keyring", 00:22:52.658 "config": [ 00:22:52.658 { 00:22:52.658 "method": "keyring_file_add_key", 00:22:52.658 "params": { 00:22:52.658 "name": "key0", 00:22:52.658 "path": "/tmp/tmp.SPeQoxy81J" 00:22:52.658 } 00:22:52.658 } 00:22:52.658 ] 00:22:52.658 }, 00:22:52.658 { 00:22:52.658 "subsystem": "iobuf", 00:22:52.658 "config": [ 00:22:52.658 { 00:22:52.658 "method": "iobuf_set_options", 00:22:52.658 "params": { 00:22:52.658 "small_pool_count": 8192, 00:22:52.658 "large_pool_count": 1024, 00:22:52.658 "small_bufsize": 8192, 00:22:52.658 "large_bufsize": 135168 00:22:52.658 } 00:22:52.658 } 00:22:52.658 ] 00:22:52.658 }, 00:22:52.658 { 00:22:52.658 "subsystem": "sock", 00:22:52.658 "config": [ 00:22:52.658 { 00:22:52.658 "method": "sock_impl_set_options", 00:22:52.658 "params": { 00:22:52.658 "impl_name": "posix", 00:22:52.658 "recv_buf_size": 2097152, 00:22:52.658 "send_buf_size": 2097152, 00:22:52.658 "enable_recv_pipe": true, 00:22:52.658 "enable_quickack": false, 00:22:52.658 "enable_placement_id": 0, 00:22:52.658 
"enable_zerocopy_send_server": true, 00:22:52.658 "enable_zerocopy_send_client": false, 00:22:52.658 "zerocopy_threshold": 0, 00:22:52.658 "tls_version": 0, 00:22:52.658 "enable_ktls": false 00:22:52.658 } 00:22:52.658 }, 00:22:52.658 { 00:22:52.658 "method": "sock_impl_set_options", 00:22:52.658 "params": { 00:22:52.658 "impl_name": "ssl", 00:22:52.658 "recv_buf_size": 4096, 00:22:52.658 "send_buf_size": 4096, 00:22:52.658 "enable_recv_pipe": true, 00:22:52.658 "enable_quickack": false, 00:22:52.658 "enable_placement_id": 0, 00:22:52.658 "enable_zerocopy_send_server": true, 00:22:52.658 "enable_zerocopy_send_client": false, 00:22:52.658 "zerocopy_threshold": 0, 00:22:52.658 "tls_version": 0, 00:22:52.658 "enable_ktls": false 00:22:52.658 } 00:22:52.658 } 00:22:52.658 ] 00:22:52.658 }, 00:22:52.658 { 00:22:52.658 "subsystem": "vmd", 00:22:52.658 "config": [] 00:22:52.658 }, 00:22:52.658 { 00:22:52.658 "subsystem": "accel", 00:22:52.658 "config": [ 00:22:52.658 { 00:22:52.658 "method": "accel_set_options", 00:22:52.658 "params": { 00:22:52.658 "small_cache_size": 128, 00:22:52.658 "large_cache_size": 16, 00:22:52.658 "task_count": 2048, 00:22:52.658 "sequence_count": 2048, 00:22:52.658 "buf_count": 2048 00:22:52.658 } 00:22:52.658 } 00:22:52.658 ] 00:22:52.658 }, 00:22:52.658 { 00:22:52.658 "subsystem": "bdev", 00:22:52.658 "config": [ 00:22:52.658 { 00:22:52.658 "method": "bdev_set_options", 00:22:52.658 "params": { 00:22:52.658 "bdev_io_pool_size": 65535, 00:22:52.658 "bdev_io_cache_size": 256, 00:22:52.658 "bdev_auto_examine": true, 00:22:52.658 "iobuf_small_cache_size": 128, 00:22:52.658 "iobuf_large_cache_size": 16 00:22:52.658 } 00:22:52.658 }, 00:22:52.658 { 00:22:52.658 "method": "bdev_raid_set_options", 00:22:52.658 "params": { 00:22:52.658 "process_window_size_kb": 1024 00:22:52.658 } 00:22:52.658 }, 00:22:52.658 { 00:22:52.658 "method": "bdev_iscsi_set_options", 00:22:52.658 "params": { 00:22:52.658 "timeout_sec": 30 00:22:52.658 } 00:22:52.658 }, 00:22:52.658 { 00:22:52.658 "method": "bdev_nvme_set_options", 00:22:52.658 "params": { 00:22:52.658 "action_on_timeout": "none", 00:22:52.658 "timeout_us": 0, 00:22:52.658 "timeout_admin_us": 0, 00:22:52.658 "keep_alive_timeout_ms": 10000, 00:22:52.658 "arbitration_burst": 0, 00:22:52.658 "low_priority_weight": 0, 00:22:52.658 "medium_priority_weight": 0, 00:22:52.658 "high_priority_weight": 0, 00:22:52.658 "nvme_adminq_poll_period_us": 10000, 00:22:52.658 "nvme_ioq_poll_period_us": 0, 00:22:52.658 "io_queue_requests": 0, 00:22:52.658 "delay_cmd_submit": true, 00:22:52.658 "transport_retry_count": 4, 00:22:52.658 "bdev_retry_count": 3, 00:22:52.658 "transport_ack_timeout": 0, 00:22:52.658 "ctrlr_loss_timeout_sec": 0, 00:22:52.658 "reconnect_delay_sec": 0, 00:22:52.658 "fast_io_fail_timeout_sec": 0, 00:22:52.658 "disable_auto_failback": false, 00:22:52.658 "generate_uuids": false, 00:22:52.658 "transport_tos": 0, 00:22:52.658 "nvme_error_stat": false, 00:22:52.658 "rdma_srq_size": 0, 00:22:52.658 "io_path_stat": false, 00:22:52.658 "allow_accel_sequence": false, 00:22:52.658 "rdma_max_cq_size": 0, 00:22:52.658 "rdma_cm_event_timeout_ms": 0, 00:22:52.658 "dhchap_digests": [ 00:22:52.658 "sha256", 00:22:52.658 "sha384", 00:22:52.658 "sha512" 00:22:52.658 ], 00:22:52.658 "dhchap_dhgroups": [ 00:22:52.658 "null", 00:22:52.658 "ffdhe2048", 00:22:52.658 "ffdhe3072", 00:22:52.658 "ffdhe4096", 00:22:52.658 "ffdhe6144", 00:22:52.658 "ffdhe8192" 00:22:52.658 ] 00:22:52.658 } 00:22:52.658 }, 00:22:52.658 { 00:22:52.658 "method": 
"bdev_nvme_set_hotplug", 00:22:52.658 "params": { 00:22:52.658 "period_us": 100000, 00:22:52.658 "enable": false 00:22:52.658 } 00:22:52.658 }, 00:22:52.658 { 00:22:52.658 "method": "bdev_malloc_create", 00:22:52.658 "params": { 00:22:52.658 "name": "malloc0", 00:22:52.658 "num_blocks": 8192, 00:22:52.658 "block_size": 4096, 00:22:52.658 "physical_block_size": 4096, 00:22:52.658 "uuid": "fb4fab8f-4db1-405e-a258-8254275a9c30", 00:22:52.658 "optimal_io_boundary": 0 00:22:52.658 } 00:22:52.658 }, 00:22:52.658 { 00:22:52.658 "method": "bdev_wait_for_examine" 00:22:52.658 } 00:22:52.658 ] 00:22:52.658 }, 00:22:52.658 { 00:22:52.658 "subsystem": "nbd", 00:22:52.658 "config": [] 00:22:52.658 }, 00:22:52.658 { 00:22:52.658 "subsystem": "scheduler", 00:22:52.658 "config": [ 00:22:52.658 { 00:22:52.658 "method": "framework_set_scheduler", 00:22:52.658 "params": { 00:22:52.658 "name": "static" 00:22:52.658 } 00:22:52.658 } 00:22:52.658 ] 00:22:52.658 }, 00:22:52.658 { 00:22:52.658 "subsystem": "nvmf", 00:22:52.658 "config": [ 00:22:52.658 { 00:22:52.658 "method": "nvmf_set_config", 00:22:52.658 "params": { 00:22:52.658 "discovery_filter": "match_any", 00:22:52.658 "admin_cmd_passthru": { 00:22:52.658 "identify_ctrlr": false 00:22:52.658 } 00:22:52.658 } 00:22:52.658 }, 00:22:52.658 { 00:22:52.658 "method": "nvmf_set_max_subsystems", 00:22:52.658 "params": { 00:22:52.658 "max_subsystems": 1024 00:22:52.658 } 00:22:52.658 }, 00:22:52.658 { 00:22:52.658 "method": "nvmf_set_crdt", 00:22:52.658 "params": { 00:22:52.658 "crdt1": 0, 00:22:52.658 "crdt2": 0, 00:22:52.658 "crdt3": 0 00:22:52.658 } 00:22:52.658 }, 00:22:52.658 { 00:22:52.658 "method": "nvmf_create_transport", 00:22:52.658 "params": { 00:22:52.658 "trtype": "TCP", 00:22:52.658 "max_queue_depth": 128, 00:22:52.658 "max_io_qpairs_per_ctrlr": 127, 00:22:52.658 "in_capsule_data_size": 4096, 00:22:52.658 "max_io_size": 131072, 00:22:52.658 "io_unit_size": 131072, 00:22:52.658 "max_aq_depth": 128, 00:22:52.658 "num_shared_buffers": 511, 00:22:52.658 "buf_cache_size": 4294967295, 00:22:52.658 "dif_insert_or_strip": false, 00:22:52.658 "zcopy": false, 00:22:52.658 "c2h_success": false, 00:22:52.658 "sock_priority": 0, 00:22:52.659 "abort_timeout_sec": 1, 00:22:52.659 "ack_timeout": 0, 00:22:52.659 "data_wr_pool_size": 0 00:22:52.659 } 00:22:52.659 }, 00:22:52.659 { 00:22:52.659 "method": "nvmf_create_subsystem", 00:22:52.659 "params": { 00:22:52.659 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:52.659 "allow_any_host": false, 00:22:52.659 "serial_number": "00000000000000000000", 00:22:52.659 "model_number": "SPDK bdev Controller", 00:22:52.659 "max_namespaces": 32, 00:22:52.659 "min_cntlid": 1, 00:22:52.659 "max_cntlid": 65519, 00:22:52.659 "ana_reporting": false 00:22:52.659 } 00:22:52.659 }, 00:22:52.659 { 00:22:52.659 "method": "nvmf_subsystem_add_host", 00:22:52.659 "params": { 00:22:52.659 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:52.659 "host": "nqn.2016-06.io.spdk:host1", 00:22:52.659 "psk": "key0" 00:22:52.659 } 00:22:52.659 }, 00:22:52.659 { 00:22:52.659 "method": "nvmf_subsystem_add_ns", 00:22:52.659 "params": { 00:22:52.659 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:52.659 "namespace": { 00:22:52.659 "nsid": 1, 00:22:52.659 "bdev_name": "malloc0", 00:22:52.659 "nguid": "FB4FAB8F4DB1405EA2588254275A9C30", 00:22:52.659 "uuid": "fb4fab8f-4db1-405e-a258-8254275a9c30", 00:22:52.659 "no_auto_visible": false 00:22:52.659 } 00:22:52.659 } 00:22:52.659 }, 00:22:52.659 { 00:22:52.659 "method": "nvmf_subsystem_add_listener", 00:22:52.659 "params": { 
00:22:52.659 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:52.659 "listen_address": { 00:22:52.659 "trtype": "TCP", 00:22:52.659 "adrfam": "IPv4", 00:22:52.659 "traddr": "10.0.0.2", 00:22:52.659 "trsvcid": "4420" 00:22:52.659 }, 00:22:52.659 "secure_channel": true 00:22:52.659 } 00:22:52.659 } 00:22:52.659 ] 00:22:52.659 } 00:22:52.659 ] 00:22:52.659 }' 00:22:52.659 21:23:46 -- target/tls.sh@264 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:52.659 21:23:46 -- target/tls.sh@264 -- # bperfcfg='{ 00:22:52.659 "subsystems": [ 00:22:52.659 { 00:22:52.659 "subsystem": "keyring", 00:22:52.659 "config": [ 00:22:52.659 { 00:22:52.659 "method": "keyring_file_add_key", 00:22:52.659 "params": { 00:22:52.659 "name": "key0", 00:22:52.659 "path": "/tmp/tmp.SPeQoxy81J" 00:22:52.659 } 00:22:52.659 } 00:22:52.659 ] 00:22:52.659 }, 00:22:52.659 { 00:22:52.659 "subsystem": "iobuf", 00:22:52.659 "config": [ 00:22:52.659 { 00:22:52.659 "method": "iobuf_set_options", 00:22:52.659 "params": { 00:22:52.659 "small_pool_count": 8192, 00:22:52.659 "large_pool_count": 1024, 00:22:52.659 "small_bufsize": 8192, 00:22:52.659 "large_bufsize": 135168 00:22:52.659 } 00:22:52.659 } 00:22:52.659 ] 00:22:52.659 }, 00:22:52.659 { 00:22:52.659 "subsystem": "sock", 00:22:52.659 "config": [ 00:22:52.659 { 00:22:52.659 "method": "sock_impl_set_options", 00:22:52.659 "params": { 00:22:52.659 "impl_name": "posix", 00:22:52.659 "recv_buf_size": 2097152, 00:22:52.659 "send_buf_size": 2097152, 00:22:52.659 "enable_recv_pipe": true, 00:22:52.659 "enable_quickack": false, 00:22:52.659 "enable_placement_id": 0, 00:22:52.659 "enable_zerocopy_send_server": true, 00:22:52.659 "enable_zerocopy_send_client": false, 00:22:52.659 "zerocopy_threshold": 0, 00:22:52.659 "tls_version": 0, 00:22:52.659 "enable_ktls": false 00:22:52.659 } 00:22:52.659 }, 00:22:52.659 { 00:22:52.659 "method": "sock_impl_set_options", 00:22:52.659 "params": { 00:22:52.659 "impl_name": "ssl", 00:22:52.659 "recv_buf_size": 4096, 00:22:52.659 "send_buf_size": 4096, 00:22:52.659 "enable_recv_pipe": true, 00:22:52.659 "enable_quickack": false, 00:22:52.659 "enable_placement_id": 0, 00:22:52.659 "enable_zerocopy_send_server": true, 00:22:52.659 "enable_zerocopy_send_client": false, 00:22:52.659 "zerocopy_threshold": 0, 00:22:52.659 "tls_version": 0, 00:22:52.659 "enable_ktls": false 00:22:52.659 } 00:22:52.659 } 00:22:52.659 ] 00:22:52.659 }, 00:22:52.659 { 00:22:52.659 "subsystem": "vmd", 00:22:52.659 "config": [] 00:22:52.659 }, 00:22:52.659 { 00:22:52.659 "subsystem": "accel", 00:22:52.659 "config": [ 00:22:52.659 { 00:22:52.659 "method": "accel_set_options", 00:22:52.659 "params": { 00:22:52.659 "small_cache_size": 128, 00:22:52.659 "large_cache_size": 16, 00:22:52.659 "task_count": 2048, 00:22:52.659 "sequence_count": 2048, 00:22:52.659 "buf_count": 2048 00:22:52.659 } 00:22:52.659 } 00:22:52.659 ] 00:22:52.659 }, 00:22:52.659 { 00:22:52.659 "subsystem": "bdev", 00:22:52.659 "config": [ 00:22:52.659 { 00:22:52.659 "method": "bdev_set_options", 00:22:52.659 "params": { 00:22:52.659 "bdev_io_pool_size": 65535, 00:22:52.659 "bdev_io_cache_size": 256, 00:22:52.659 "bdev_auto_examine": true, 00:22:52.659 "iobuf_small_cache_size": 128, 00:22:52.659 "iobuf_large_cache_size": 16 00:22:52.659 } 00:22:52.659 }, 00:22:52.659 { 00:22:52.659 "method": "bdev_raid_set_options", 00:22:52.659 "params": { 00:22:52.659 "process_window_size_kb": 1024 00:22:52.659 } 00:22:52.659 }, 00:22:52.659 { 00:22:52.659 "method": 
"bdev_iscsi_set_options", 00:22:52.659 "params": { 00:22:52.659 "timeout_sec": 30 00:22:52.659 } 00:22:52.659 }, 00:22:52.659 { 00:22:52.659 "method": "bdev_nvme_set_options", 00:22:52.659 "params": { 00:22:52.659 "action_on_timeout": "none", 00:22:52.659 "timeout_us": 0, 00:22:52.659 "timeout_admin_us": 0, 00:22:52.659 "keep_alive_timeout_ms": 10000, 00:22:52.659 "arbitration_burst": 0, 00:22:52.659 "low_priority_weight": 0, 00:22:52.659 "medium_priority_weight": 0, 00:22:52.659 "high_priority_weight": 0, 00:22:52.659 "nvme_adminq_poll_period_us": 10000, 00:22:52.659 "nvme_ioq_poll_period_us": 0, 00:22:52.659 "io_queue_requests": 512, 00:22:52.659 "delay_cmd_submit": true, 00:22:52.659 "transport_retry_count": 4, 00:22:52.659 "bdev_retry_count": 3, 00:22:52.659 "transport_ack_timeout": 0, 00:22:52.659 "ctrlr_loss_timeout_sec": 0, 00:22:52.659 "reconnect_delay_sec": 0, 00:22:52.659 "fast_io_fail_timeout_sec": 0, 00:22:52.659 "disable_auto_failback": false, 00:22:52.659 "generate_uuids": false, 00:22:52.659 "transport_tos": 0, 00:22:52.659 "nvme_error_stat": false, 00:22:52.659 "rdma_srq_size": 0, 00:22:52.659 "io_path_stat": false, 00:22:52.659 "allow_accel_sequence": false, 00:22:52.659 "rdma_max_cq_size": 0, 00:22:52.659 "rdma_cm_event_timeout_ms": 0, 00:22:52.659 "dhchap_digests": [ 00:22:52.659 "sha256", 00:22:52.659 "sha384", 00:22:52.659 "sha512" 00:22:52.659 ], 00:22:52.659 "dhchap_dhgroups": [ 00:22:52.659 "null", 00:22:52.659 "ffdhe2048", 00:22:52.659 "ffdhe3072", 00:22:52.659 "ffdhe4096", 00:22:52.659 "ffdhe6144", 00:22:52.659 "ffdhe8192" 00:22:52.659 ] 00:22:52.659 } 00:22:52.659 }, 00:22:52.659 { 00:22:52.659 "method": "bdev_nvme_attach_controller", 00:22:52.659 "params": { 00:22:52.659 "name": "nvme0", 00:22:52.659 "trtype": "TCP", 00:22:52.659 "adrfam": "IPv4", 00:22:52.659 "traddr": "10.0.0.2", 00:22:52.659 "trsvcid": "4420", 00:22:52.659 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:52.659 "prchk_reftag": false, 00:22:52.659 "prchk_guard": false, 00:22:52.659 "ctrlr_loss_timeout_sec": 0, 00:22:52.659 "reconnect_delay_sec": 0, 00:22:52.659 "fast_io_fail_timeout_sec": 0, 00:22:52.659 "psk": "key0", 00:22:52.659 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:52.659 "hdgst": false, 00:22:52.659 "ddgst": false 00:22:52.659 } 00:22:52.659 }, 00:22:52.659 { 00:22:52.659 "method": "bdev_nvme_set_hotplug", 00:22:52.659 "params": { 00:22:52.659 "period_us": 100000, 00:22:52.659 "enable": false 00:22:52.659 } 00:22:52.659 }, 00:22:52.659 { 00:22:52.659 "method": "bdev_enable_histogram", 00:22:52.659 "params": { 00:22:52.659 "name": "nvme0n1", 00:22:52.659 "enable": true 00:22:52.659 } 00:22:52.659 }, 00:22:52.659 { 00:22:52.659 "method": "bdev_wait_for_examine" 00:22:52.659 } 00:22:52.659 ] 00:22:52.659 }, 00:22:52.659 { 00:22:52.659 "subsystem": "nbd", 00:22:52.659 "config": [] 00:22:52.659 } 00:22:52.659 ] 00:22:52.659 }' 00:22:52.659 21:23:46 -- target/tls.sh@266 -- # killprocess 1505045 00:22:52.660 21:23:46 -- common/autotest_common.sh@936 -- # '[' -z 1505045 ']' 00:22:52.660 21:23:46 -- common/autotest_common.sh@940 -- # kill -0 1505045 00:22:52.660 21:23:46 -- common/autotest_common.sh@941 -- # uname 00:22:52.660 21:23:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:52.660 21:23:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1505045 00:22:52.660 21:23:46 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:52.660 21:23:46 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:52.660 21:23:46 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 1505045' 00:22:52.660 killing process with pid 1505045 00:22:52.660 21:23:46 -- common/autotest_common.sh@955 -- # kill 1505045 00:22:52.660 Received shutdown signal, test time was about 1.000000 seconds 00:22:52.660 00:22:52.660 Latency(us) 00:22:52.660 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:52.660 =================================================================================================================== 00:22:52.660 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:52.660 21:23:46 -- common/autotest_common.sh@960 -- # wait 1505045 00:22:53.234 21:23:47 -- target/tls.sh@267 -- # killprocess 1504999 00:22:53.234 21:23:47 -- common/autotest_common.sh@936 -- # '[' -z 1504999 ']' 00:22:53.234 21:23:47 -- common/autotest_common.sh@940 -- # kill -0 1504999 00:22:53.234 21:23:47 -- common/autotest_common.sh@941 -- # uname 00:22:53.234 21:23:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:53.234 21:23:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1504999 00:22:53.234 21:23:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:53.234 21:23:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:53.234 21:23:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1504999' 00:22:53.234 killing process with pid 1504999 00:22:53.234 21:23:47 -- common/autotest_common.sh@955 -- # kill 1504999 00:22:53.234 21:23:47 -- common/autotest_common.sh@960 -- # wait 1504999 00:22:53.808 21:23:47 -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:22:53.808 21:23:47 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:53.808 21:23:47 -- target/tls.sh@269 -- # echo '{ 00:22:53.808 "subsystems": [ 00:22:53.808 { 00:22:53.808 "subsystem": "keyring", 00:22:53.808 "config": [ 00:22:53.808 { 00:22:53.808 "method": "keyring_file_add_key", 00:22:53.808 "params": { 00:22:53.808 "name": "key0", 00:22:53.808 "path": "/tmp/tmp.SPeQoxy81J" 00:22:53.808 } 00:22:53.808 } 00:22:53.808 ] 00:22:53.808 }, 00:22:53.808 { 00:22:53.808 "subsystem": "iobuf", 00:22:53.808 "config": [ 00:22:53.808 { 00:22:53.808 "method": "iobuf_set_options", 00:22:53.808 "params": { 00:22:53.808 "small_pool_count": 8192, 00:22:53.808 "large_pool_count": 1024, 00:22:53.808 "small_bufsize": 8192, 00:22:53.808 "large_bufsize": 135168 00:22:53.808 } 00:22:53.808 } 00:22:53.808 ] 00:22:53.808 }, 00:22:53.808 { 00:22:53.808 "subsystem": "sock", 00:22:53.808 "config": [ 00:22:53.808 { 00:22:53.808 "method": "sock_impl_set_options", 00:22:53.808 "params": { 00:22:53.808 "impl_name": "posix", 00:22:53.808 "recv_buf_size": 2097152, 00:22:53.808 "send_buf_size": 2097152, 00:22:53.808 "enable_recv_pipe": true, 00:22:53.808 "enable_quickack": false, 00:22:53.808 "enable_placement_id": 0, 00:22:53.808 "enable_zerocopy_send_server": true, 00:22:53.808 "enable_zerocopy_send_client": false, 00:22:53.808 "zerocopy_threshold": 0, 00:22:53.808 "tls_version": 0, 00:22:53.808 "enable_ktls": false 00:22:53.808 } 00:22:53.808 }, 00:22:53.808 { 00:22:53.808 "method": "sock_impl_set_options", 00:22:53.808 "params": { 00:22:53.808 "impl_name": "ssl", 00:22:53.808 "recv_buf_size": 4096, 00:22:53.808 "send_buf_size": 4096, 00:22:53.808 "enable_recv_pipe": true, 00:22:53.808 "enable_quickack": false, 00:22:53.808 "enable_placement_id": 0, 00:22:53.808 "enable_zerocopy_send_server": true, 00:22:53.808 "enable_zerocopy_send_client": false, 00:22:53.808 "zerocopy_threshold": 0, 
00:22:53.808 "tls_version": 0, 00:22:53.808 "enable_ktls": false 00:22:53.808 } 00:22:53.808 } 00:22:53.808 ] 00:22:53.808 }, 00:22:53.808 { 00:22:53.808 "subsystem": "vmd", 00:22:53.808 "config": [] 00:22:53.808 }, 00:22:53.808 { 00:22:53.808 "subsystem": "accel", 00:22:53.808 "config": [ 00:22:53.808 { 00:22:53.808 "method": "accel_set_options", 00:22:53.808 "params": { 00:22:53.808 "small_cache_size": 128, 00:22:53.808 "large_cache_size": 16, 00:22:53.808 "task_count": 2048, 00:22:53.808 "sequence_count": 2048, 00:22:53.808 "buf_count": 2048 00:22:53.808 } 00:22:53.808 } 00:22:53.808 ] 00:22:53.808 }, 00:22:53.808 { 00:22:53.808 "subsystem": "bdev", 00:22:53.808 "config": [ 00:22:53.808 { 00:22:53.808 "method": "bdev_set_options", 00:22:53.808 "params": { 00:22:53.808 "bdev_io_pool_size": 65535, 00:22:53.808 "bdev_io_cache_size": 256, 00:22:53.808 "bdev_auto_examine": true, 00:22:53.808 "iobuf_small_cache_size": 128, 00:22:53.808 "iobuf_large_cache_size": 16 00:22:53.808 } 00:22:53.808 }, 00:22:53.808 { 00:22:53.808 "method": "bdev_raid_set_options", 00:22:53.808 "params": { 00:22:53.808 "process_window_size_kb": 1024 00:22:53.808 } 00:22:53.808 }, 00:22:53.808 { 00:22:53.808 "method": "bdev_iscsi_set_options", 00:22:53.808 "params": { 00:22:53.808 "timeout_sec": 30 00:22:53.808 } 00:22:53.808 }, 00:22:53.808 { 00:22:53.808 "method": "bdev_nvme_set_options", 00:22:53.808 "params": { 00:22:53.808 "action_on_timeout": "none", 00:22:53.808 "timeout_us": 0, 00:22:53.808 "timeout_admin_us": 0, 00:22:53.808 "keep_alive_timeout_ms": 10000, 00:22:53.808 "arbitration_burst": 0, 00:22:53.808 "low_priority_weight": 0, 00:22:53.808 "medium_priority_weight": 0, 00:22:53.808 "high_priority_weight": 0, 00:22:53.808 "nvme_adminq_poll_period_us": 10000, 00:22:53.809 "nvme_ioq_poll_period_us": 0, 00:22:53.809 "io_queue_requests": 0, 00:22:53.809 "delay_cmd_submit": true, 00:22:53.809 "transport_retry_count": 4, 00:22:53.809 "bdev_retry_count": 3, 00:22:53.809 "transport_ack_timeout": 0, 00:22:53.809 "ctrlr_loss_timeout_sec": 0, 00:22:53.809 "reconnect_delay_sec": 0, 00:22:53.809 "fast_io_fail_timeout_sec": 0, 00:22:53.809 "disable_auto_failback": false, 00:22:53.809 "generate_uuids": false, 00:22:53.809 "transport_tos": 0, 00:22:53.809 "nvme_error_stat": false, 00:22:53.809 "rdma_srq_size": 0, 00:22:53.809 "io_path_stat": false, 00:22:53.809 "allow_accel_sequence": false, 00:22:53.809 "rdma_max_cq_size": 0, 00:22:53.809 "rdma_cm_event_timeout_ms": 0, 00:22:53.809 "dhchap_digests": [ 00:22:53.809 "sha256", 00:22:53.809 "sha384", 00:22:53.809 "sha512" 00:22:53.809 ], 00:22:53.809 "dhchap_dhgroups": [ 00:22:53.809 "null", 00:22:53.809 "ffdhe2048", 00:22:53.809 "ffdhe3072", 00:22:53.809 "ffdhe4096", 00:22:53.809 "ffdhe6144", 00:22:53.809 "ffdhe8192" 00:22:53.809 ] 00:22:53.809 } 00:22:53.809 }, 00:22:53.809 { 00:22:53.809 "method": "bdev_nvme_set_hotplug", 00:22:53.809 "params": { 00:22:53.809 "period_us": 100000, 00:22:53.809 "enable": false 00:22:53.809 } 00:22:53.809 }, 00:22:53.809 { 00:22:53.809 "method": "bdev_malloc_create", 00:22:53.809 "params": { 00:22:53.809 "name": "malloc0", 00:22:53.809 "num_blocks": 8192, 00:22:53.809 "block_size": 4096, 00:22:53.809 "physical_block_size": 4096, 00:22:53.809 "uuid": "fb4fab8f-4db1-405e-a258-8254275a9c30", 00:22:53.809 "optimal_io_boundary": 0 00:22:53.809 } 00:22:53.809 }, 00:22:53.809 { 00:22:53.809 "method": "bdev_wait_for_examine" 00:22:53.809 } 00:22:53.809 ] 00:22:53.809 }, 00:22:53.809 { 00:22:53.809 "subsystem": "nbd", 00:22:53.809 "config": [] 
00:22:53.809 }, 00:22:53.809 { 00:22:53.809 "subsystem": "scheduler", 00:22:53.809 "config": [ 00:22:53.809 { 00:22:53.809 "method": "framework_set_scheduler", 00:22:53.809 "params": { 00:22:53.809 "name": "static" 00:22:53.809 } 00:22:53.809 } 00:22:53.809 ] 00:22:53.809 }, 00:22:53.809 { 00:22:53.809 "subsystem": "nvmf", 00:22:53.809 "config": [ 00:22:53.809 { 00:22:53.809 "method": "nvmf_set_config", 00:22:53.809 "params": { 00:22:53.809 "discovery_filter": "match_any", 00:22:53.809 "admin_cmd_passthru": { 00:22:53.809 "identify_ctrlr": false 00:22:53.809 } 00:22:53.809 } 00:22:53.809 }, 00:22:53.809 { 00:22:53.809 "method": "nvmf_set_max_subsystems", 00:22:53.809 "params": { 00:22:53.809 "max_subsystems": 1024 00:22:53.809 } 00:22:53.809 }, 00:22:53.809 { 00:22:53.809 "method": "nvmf_set_crdt", 00:22:53.809 "params": { 00:22:53.809 "crdt1": 0, 00:22:53.809 "crdt2": 0, 00:22:53.809 "crdt3": 0 00:22:53.809 } 00:22:53.809 }, 00:22:53.809 { 00:22:53.809 "method": "nvmf_create_transport", 00:22:53.809 "params": { 00:22:53.809 "trtype": "TCP", 00:22:53.809 "max_queue_depth": 128, 00:22:53.809 "max_io_qpairs_per_ctrlr": 127, 00:22:53.809 "in_capsule_data_size": 4096, 00:22:53.809 "max_io_size": 131072, 00:22:53.809 "io_unit_size": 131072, 00:22:53.809 "max_aq_depth": 128, 00:22:53.809 "num_shared_buffers": 511, 00:22:53.809 "buf_cache_size": 4294967295, 00:22:53.809 "dif_insert_or_strip": false, 00:22:53.809 "zcopy": false, 00:22:53.809 "c2h_success": false, 00:22:53.809 "sock_priority": 0, 00:22:53.809 "abort_timeout_sec": 1, 00:22:53.809 "ack_timeout": 0, 00:22:53.809 "data_wr_pool_size": 0 00:22:53.809 } 00:22:53.809 }, 00:22:53.809 { 00:22:53.809 "method": "nvmf_create_subsystem", 00:22:53.809 "params": { 00:22:53.809 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:53.809 "allow_any_host": false, 00:22:53.809 "serial_number": "00000000000000000000", 00:22:53.809 "model_number": "SPDK bdev Controller", 00:22:53.809 "max_namespaces": 32, 00:22:53.809 "min_cntlid": 1, 00:22:53.809 "max_cntlid": 65519, 00:22:53.809 "ana_reporting": false 00:22:53.809 } 00:22:53.809 }, 00:22:53.809 { 00:22:53.809 "method": "nvmf_subsystem_add_host", 00:22:53.809 "params": { 00:22:53.809 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:53.809 "host": "nqn.2016-06.io.spdk:host1", 00:22:53.809 "psk": "key0" 00:22:53.809 } 00:22:53.809 }, 00:22:53.809 { 00:22:53.809 "method": "nvmf_subsystem_add_ns", 00:22:53.809 "params": { 00:22:53.809 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:53.809 "namespace": { 00:22:53.809 "nsid": 1, 00:22:53.809 "bdev_name": "malloc0", 00:22:53.809 "nguid": "FB4FAB8F4DB1405EA2588254275A9C30", 00:22:53.809 "uuid": "fb4fab8f-4db1-405e-a258-8254275a9c30", 00:22:53.809 "no_auto_visible": false 00:22:53.809 } 00:22:53.809 } 00:22:53.809 }, 00:22:53.809 { 00:22:53.809 "method": "nvmf_subsystem_add_listener", 00:22:53.809 "params": { 00:22:53.809 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:53.809 "listen_address": { 00:22:53.809 "trtype": "TCP", 00:22:53.809 "adrfam": "IPv4", 00:22:53.809 "traddr": "10.0.0.2", 00:22:53.809 "trsvcid": "4420" 00:22:53.809 }, 00:22:53.809 "secure_channel": true 00:22:53.809 } 00:22:53.809 } 00:22:53.809 ] 00:22:53.809 } 00:22:53.809 ] 00:22:53.809 }' 00:22:53.809 21:23:47 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:53.809 21:23:47 -- common/autotest_common.sh@10 -- # set +x 00:22:53.809 21:23:47 -- nvmf/common.sh@470 -- # nvmfpid=1505890 00:22:53.809 21:23:47 -- nvmf/common.sh@471 -- # waitforlisten 1505890 00:22:53.809 21:23:47 -- nvmf/common.sh@469 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:22:53.809 21:23:47 -- common/autotest_common.sh@817 -- # '[' -z 1505890 ']' 00:22:53.809 21:23:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:53.809 21:23:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:53.809 21:23:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:53.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:53.809 21:23:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:53.809 21:23:47 -- common/autotest_common.sh@10 -- # set +x 00:22:53.809 [2024-04-23 21:23:47.850473] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 00:22:53.809 [2024-04-23 21:23:47.850551] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:53.809 EAL: No free 2048 kB hugepages reported on node 1 00:22:53.809 [2024-04-23 21:23:47.943247] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:53.809 [2024-04-23 21:23:48.034601] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:53.809 [2024-04-23 21:23:48.034646] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:53.809 [2024-04-23 21:23:48.034657] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:53.809 [2024-04-23 21:23:48.034666] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:53.809 [2024-04-23 21:23:48.034674] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:53.809 [2024-04-23 21:23:48.034761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:54.070 [2024-04-23 21:23:48.328679] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:54.332 [2024-04-23 21:23:48.360642] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:54.332 [2024-04-23 21:23:48.360899] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:54.332 21:23:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:54.332 21:23:48 -- common/autotest_common.sh@850 -- # return 0 00:22:54.332 21:23:48 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:54.332 21:23:48 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:54.332 21:23:48 -- common/autotest_common.sh@10 -- # set +x 00:22:54.594 21:23:48 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:54.594 21:23:48 -- target/tls.sh@272 -- # bdevperf_pid=1505957 00:22:54.594 21:23:48 -- target/tls.sh@273 -- # waitforlisten 1505957 /var/tmp/bdevperf.sock 00:22:54.594 21:23:48 -- common/autotest_common.sh@817 -- # '[' -z 1505957 ']' 00:22:54.594 21:23:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:54.594 21:23:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:54.594 21:23:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:22:54.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:54.594 21:23:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:54.594 21:23:48 -- common/autotest_common.sh@10 -- # set +x 00:22:54.594 21:23:48 -- target/tls.sh@270 -- # echo '{ 00:22:54.594 "subsystems": [ 00:22:54.594 { 00:22:54.594 "subsystem": "keyring", 00:22:54.594 "config": [ 00:22:54.594 { 00:22:54.594 "method": "keyring_file_add_key", 00:22:54.594 "params": { 00:22:54.594 "name": "key0", 00:22:54.594 "path": "/tmp/tmp.SPeQoxy81J" 00:22:54.594 } 00:22:54.594 } 00:22:54.594 ] 00:22:54.594 }, 00:22:54.594 { 00:22:54.594 "subsystem": "iobuf", 00:22:54.594 "config": [ 00:22:54.594 { 00:22:54.594 "method": "iobuf_set_options", 00:22:54.594 "params": { 00:22:54.594 "small_pool_count": 8192, 00:22:54.594 "large_pool_count": 1024, 00:22:54.594 "small_bufsize": 8192, 00:22:54.594 "large_bufsize": 135168 00:22:54.594 } 00:22:54.594 } 00:22:54.594 ] 00:22:54.594 }, 00:22:54.594 { 00:22:54.594 "subsystem": "sock", 00:22:54.594 "config": [ 00:22:54.594 { 00:22:54.594 "method": "sock_impl_set_options", 00:22:54.594 "params": { 00:22:54.594 "impl_name": "posix", 00:22:54.594 "recv_buf_size": 2097152, 00:22:54.594 "send_buf_size": 2097152, 00:22:54.594 "enable_recv_pipe": true, 00:22:54.594 "enable_quickack": false, 00:22:54.594 "enable_placement_id": 0, 00:22:54.594 "enable_zerocopy_send_server": true, 00:22:54.594 "enable_zerocopy_send_client": false, 00:22:54.594 "zerocopy_threshold": 0, 00:22:54.594 "tls_version": 0, 00:22:54.594 "enable_ktls": false 00:22:54.594 } 00:22:54.594 }, 00:22:54.594 { 00:22:54.594 "method": "sock_impl_set_options", 00:22:54.594 "params": { 00:22:54.594 "impl_name": "ssl", 00:22:54.594 "recv_buf_size": 4096, 00:22:54.594 "send_buf_size": 4096, 00:22:54.594 "enable_recv_pipe": true, 00:22:54.594 "enable_quickack": false, 00:22:54.594 "enable_placement_id": 0, 00:22:54.594 "enable_zerocopy_send_server": true, 00:22:54.594 "enable_zerocopy_send_client": false, 00:22:54.594 "zerocopy_threshold": 0, 00:22:54.594 "tls_version": 0, 00:22:54.594 "enable_ktls": false 00:22:54.594 } 00:22:54.594 } 00:22:54.594 ] 00:22:54.594 }, 00:22:54.594 { 00:22:54.594 "subsystem": "vmd", 00:22:54.594 "config": [] 00:22:54.594 }, 00:22:54.594 { 00:22:54.594 "subsystem": "accel", 00:22:54.594 "config": [ 00:22:54.594 { 00:22:54.594 "method": "accel_set_options", 00:22:54.594 "params": { 00:22:54.594 "small_cache_size": 128, 00:22:54.594 "large_cache_size": 16, 00:22:54.594 "task_count": 2048, 00:22:54.594 "sequence_count": 2048, 00:22:54.594 "buf_count": 2048 00:22:54.594 } 00:22:54.594 } 00:22:54.594 ] 00:22:54.594 }, 00:22:54.594 { 00:22:54.594 "subsystem": "bdev", 00:22:54.594 "config": [ 00:22:54.594 { 00:22:54.594 "method": "bdev_set_options", 00:22:54.594 "params": { 00:22:54.594 "bdev_io_pool_size": 65535, 00:22:54.594 "bdev_io_cache_size": 256, 00:22:54.594 "bdev_auto_examine": true, 00:22:54.594 "iobuf_small_cache_size": 128, 00:22:54.594 "iobuf_large_cache_size": 16 00:22:54.594 } 00:22:54.594 }, 00:22:54.594 { 00:22:54.594 "method": "bdev_raid_set_options", 00:22:54.594 "params": { 00:22:54.594 "process_window_size_kb": 1024 00:22:54.594 } 00:22:54.594 }, 00:22:54.594 { 00:22:54.594 "method": "bdev_iscsi_set_options", 00:22:54.594 "params": { 00:22:54.594 "timeout_sec": 30 00:22:54.594 } 00:22:54.594 }, 00:22:54.594 { 00:22:54.594 "method": "bdev_nvme_set_options", 00:22:54.594 "params": { 00:22:54.594 "action_on_timeout": "none", 00:22:54.594 "timeout_us": 0, 
00:22:54.594 "timeout_admin_us": 0, 00:22:54.594 "keep_alive_timeout_ms": 10000, 00:22:54.594 "arbitration_burst": 0, 00:22:54.594 "low_priority_weight": 0, 00:22:54.594 "medium_priority_weight": 0, 00:22:54.594 "high_priority_weight": 0, 00:22:54.594 "nvme_adminq_poll_period_us": 10000, 00:22:54.594 "nvme_ioq_poll_period_us": 0, 00:22:54.594 "io_queue_requests": 512, 00:22:54.594 "delay_cmd_submit": true, 00:22:54.594 "transport_retry_count": 4, 00:22:54.594 "bdev_retry_count": 3, 00:22:54.594 "transport_ack_timeout": 0, 00:22:54.594 "ctrlr_loss_timeout_sec": 0, 00:22:54.594 "reconnect_delay_sec": 0, 00:22:54.594 "fast_io_fail_timeout_sec": 0, 00:22:54.594 "disable_auto_failback": false, 00:22:54.594 "generate_uuids": false, 00:22:54.594 "transport_tos": 0, 00:22:54.594 "nvme_error_stat": false, 00:22:54.594 "rdma_srq_size": 0, 00:22:54.594 "io_path_stat": false, 00:22:54.594 "allow_accel_sequence": false, 00:22:54.594 "rdma_max_cq_size": 0, 00:22:54.594 "rdma_cm_event_timeout_ms": 0, 00:22:54.594 "dhchap_digests": [ 00:22:54.594 "sha256", 00:22:54.594 "sha384", 00:22:54.594 "sha512" 00:22:54.594 ], 00:22:54.594 "dhchap_dhgroups": [ 00:22:54.594 "null", 00:22:54.594 "ffdhe2048", 00:22:54.594 "ffdhe3072", 00:22:54.594 "ffdhe4096", 00:22:54.594 "ffdhe6144", 00:22:54.594 "ffdhe8192" 00:22:54.594 ] 00:22:54.594 } 00:22:54.594 }, 00:22:54.594 { 00:22:54.594 "method": "bdev_nvme_attach_controller", 00:22:54.594 "params": { 00:22:54.594 "name": "nvme0", 00:22:54.594 "trtype": "TCP", 00:22:54.594 "adrfam": "IPv4", 00:22:54.594 "traddr": "10.0.0.2", 00:22:54.594 "trsvcid": "4420", 00:22:54.594 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:54.594 "prchk_reftag": false, 00:22:54.594 "prchk_guard": false, 00:22:54.594 "ctrlr_loss_timeout_sec": 0, 00:22:54.594 "reconnect_delay_sec": 0, 00:22:54.594 "fast_io_fail_timeout_sec": 0, 00:22:54.594 "psk": "key0", 00:22:54.594 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:54.594 "hdgst": false, 00:22:54.594 "ddgst": false 00:22:54.594 } 00:22:54.594 }, 00:22:54.594 { 00:22:54.594 "method": "bdev_nvme_set_hotplug", 00:22:54.594 "params": { 00:22:54.594 "period_us": 100000, 00:22:54.594 "enable": false 00:22:54.594 } 00:22:54.594 }, 00:22:54.594 { 00:22:54.594 "method": "bdev_enable_histogram", 00:22:54.594 "params": { 00:22:54.594 "name": "nvme0n1", 00:22:54.594 "enable": true 00:22:54.594 } 00:22:54.594 }, 00:22:54.594 { 00:22:54.594 "method": "bdev_wait_for_examine" 00:22:54.594 } 00:22:54.594 ] 00:22:54.594 }, 00:22:54.594 { 00:22:54.594 "subsystem": "nbd", 00:22:54.594 "config": [] 00:22:54.594 } 00:22:54.594 ] 00:22:54.594 }' 00:22:54.594 21:23:48 -- target/tls.sh@270 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:22:54.594 [2024-04-23 21:23:48.726059] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 
00:22:54.594 [2024-04-23 21:23:48.726211] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1505957 ] 00:22:54.594 EAL: No free 2048 kB hugepages reported on node 1 00:22:54.594 [2024-04-23 21:23:48.860649] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:54.854 [2024-04-23 21:23:48.958197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:55.114 [2024-04-23 21:23:49.177565] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:55.375 21:23:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:55.375 21:23:49 -- common/autotest_common.sh@850 -- # return 0 00:22:55.375 21:23:49 -- target/tls.sh@275 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:55.375 21:23:49 -- target/tls.sh@275 -- # jq -r '.[].name' 00:22:55.375 21:23:49 -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:55.375 21:23:49 -- target/tls.sh@276 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:55.636 Running I/O for 1 seconds... 00:22:56.573 00:22:56.573 Latency(us) 00:22:56.573 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:56.573 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:56.573 Verification LBA range: start 0x0 length 0x2000 00:22:56.574 nvme0n1 : 1.03 2645.26 10.33 0.00 0.00 47706.36 6519.11 107617.01 00:22:56.574 =================================================================================================================== 00:22:56.574 Total : 2645.26 10.33 0.00 0.00 47706.36 6519.11 107617.01 00:22:56.574 0 00:22:56.574 21:23:50 -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:22:56.574 21:23:50 -- target/tls.sh@279 -- # cleanup 00:22:56.574 21:23:50 -- target/tls.sh@15 -- # process_shm --id 0 00:22:56.574 21:23:50 -- common/autotest_common.sh@794 -- # type=--id 00:22:56.574 21:23:50 -- common/autotest_common.sh@795 -- # id=0 00:22:56.574 21:23:50 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:22:56.574 21:23:50 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:56.574 21:23:50 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:22:56.574 21:23:50 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:22:56.574 21:23:50 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:22:56.574 21:23:50 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:56.574 nvmf_trace.0 00:22:56.574 21:23:50 -- common/autotest_common.sh@809 -- # return 0 00:22:56.574 21:23:50 -- target/tls.sh@16 -- # killprocess 1505957 00:22:56.574 21:23:50 -- common/autotest_common.sh@936 -- # '[' -z 1505957 ']' 00:22:56.574 21:23:50 -- common/autotest_common.sh@940 -- # kill -0 1505957 00:22:56.574 21:23:50 -- common/autotest_common.sh@941 -- # uname 00:22:56.574 21:23:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:56.574 21:23:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1505957 00:22:56.574 21:23:50 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:56.574 21:23:50 -- common/autotest_common.sh@946 -- # '[' 
reactor_1 = sudo ']' 00:22:56.574 21:23:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1505957' 00:22:56.574 killing process with pid 1505957 00:22:56.574 21:23:50 -- common/autotest_common.sh@955 -- # kill 1505957 00:22:56.574 Received shutdown signal, test time was about 1.000000 seconds 00:22:56.574 00:22:56.574 Latency(us) 00:22:56.574 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:56.574 =================================================================================================================== 00:22:56.574 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:56.574 21:23:50 -- common/autotest_common.sh@960 -- # wait 1505957 00:22:57.145 21:23:51 -- target/tls.sh@17 -- # nvmftestfini 00:22:57.145 21:23:51 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:57.145 21:23:51 -- nvmf/common.sh@117 -- # sync 00:22:57.145 21:23:51 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:57.145 21:23:51 -- nvmf/common.sh@120 -- # set +e 00:22:57.145 21:23:51 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:57.146 21:23:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:57.146 rmmod nvme_tcp 00:22:57.146 rmmod nvme_fabrics 00:22:57.146 rmmod nvme_keyring 00:22:57.146 21:23:51 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:57.146 21:23:51 -- nvmf/common.sh@124 -- # set -e 00:22:57.146 21:23:51 -- nvmf/common.sh@125 -- # return 0 00:22:57.146 21:23:51 -- nvmf/common.sh@478 -- # '[' -n 1505890 ']' 00:22:57.146 21:23:51 -- nvmf/common.sh@479 -- # killprocess 1505890 00:22:57.146 21:23:51 -- common/autotest_common.sh@936 -- # '[' -z 1505890 ']' 00:22:57.146 21:23:51 -- common/autotest_common.sh@940 -- # kill -0 1505890 00:22:57.146 21:23:51 -- common/autotest_common.sh@941 -- # uname 00:22:57.146 21:23:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:57.146 21:23:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1505890 00:22:57.146 21:23:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:57.146 21:23:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:57.146 21:23:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1505890' 00:22:57.146 killing process with pid 1505890 00:22:57.146 21:23:51 -- common/autotest_common.sh@955 -- # kill 1505890 00:22:57.146 21:23:51 -- common/autotest_common.sh@960 -- # wait 1505890 00:22:57.715 21:23:51 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:57.716 21:23:51 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:57.716 21:23:51 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:57.716 21:23:51 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:57.716 21:23:51 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:57.716 21:23:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:57.716 21:23:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:57.716 21:23:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:59.622 21:23:53 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:59.622 21:23:53 -- target/tls.sh@18 -- # rm -f /tmp/tmp.bZcTclfEgH /tmp/tmp.93avnLhUSa /tmp/tmp.SPeQoxy81J 00:22:59.622 00:22:59.622 real 1m26.307s 00:22:59.622 user 2m12.901s 00:22:59.622 sys 0m25.902s 00:22:59.622 21:23:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:59.622 21:23:53 -- common/autotest_common.sh@10 -- # set +x 00:22:59.622 ************************************ 00:22:59.622 END TEST nvmf_tls 00:22:59.622 
************************************ 00:22:59.883 21:23:53 -- nvmf/nvmf.sh@61 -- # run_test nvmf_fips /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:59.883 21:23:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:59.883 21:23:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:59.883 21:23:53 -- common/autotest_common.sh@10 -- # set +x 00:22:59.883 ************************************ 00:22:59.883 START TEST nvmf_fips 00:22:59.883 ************************************ 00:22:59.883 21:23:53 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:59.883 * Looking for test storage... 00:22:59.883 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips 00:22:59.883 21:23:54 -- fips/fips.sh@11 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:22:59.883 21:23:54 -- nvmf/common.sh@7 -- # uname -s 00:22:59.883 21:23:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:59.883 21:23:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:59.883 21:23:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:59.883 21:23:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:59.883 21:23:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:59.883 21:23:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:59.883 21:23:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:59.883 21:23:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:59.883 21:23:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:59.883 21:23:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:59.883 21:23:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:22:59.883 21:23:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:22:59.883 21:23:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:59.883 21:23:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:59.883 21:23:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:22:59.883 21:23:54 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:59.883 21:23:54 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:22:59.883 21:23:54 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:59.883 21:23:54 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:59.883 21:23:54 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:59.883 21:23:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.883 21:23:54 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.883 21:23:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.883 21:23:54 -- paths/export.sh@5 -- # export PATH 00:22:59.883 21:23:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.883 21:23:54 -- nvmf/common.sh@47 -- # : 0 00:22:59.883 21:23:54 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:59.883 21:23:54 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:59.883 21:23:54 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:59.883 21:23:54 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:59.883 21:23:54 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:59.883 21:23:54 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:59.883 21:23:54 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:59.883 21:23:54 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:59.883 21:23:54 -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:22:59.883 21:23:54 -- fips/fips.sh@89 -- # check_openssl_version 00:22:59.883 21:23:54 -- fips/fips.sh@83 -- # local target=3.0.0 00:22:59.883 21:23:54 -- fips/fips.sh@85 -- # openssl version 00:22:59.883 21:23:54 -- fips/fips.sh@85 -- # awk '{print $2}' 00:22:59.883 21:23:54 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:22:59.883 21:23:54 -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:22:59.883 21:23:54 -- scripts/common.sh@330 -- # local ver1 ver1_l 00:22:59.883 21:23:54 -- scripts/common.sh@331 -- # local ver2 ver2_l 00:22:59.883 21:23:54 -- scripts/common.sh@333 -- # IFS=.-: 00:22:59.883 21:23:54 -- scripts/common.sh@333 -- # read -ra ver1 00:22:59.883 21:23:54 -- scripts/common.sh@334 -- # IFS=.-: 00:22:59.883 21:23:54 -- scripts/common.sh@334 -- # read -ra ver2 00:22:59.883 21:23:54 -- scripts/common.sh@335 -- # local 'op=>=' 00:22:59.883 21:23:54 -- scripts/common.sh@337 -- # ver1_l=3 00:22:59.883 21:23:54 -- scripts/common.sh@338 -- # ver2_l=3 00:22:59.883 21:23:54 -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 
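The cmp_versions walk that the trace steps through below is the OpenSSL gate for the whole FIPS suite: 'ge 3.0.9 3.0.0' must hold before fips.sh proceeds. A hedged, simplified reconstruction of that comparison (the real helper in scripts/common.sh takes an explicit operator argument; only the '>=' path is sketched here):

    # Split each version on '.', '-' or ':' and compare numerically,
    # field by field, exactly as the traced loop below does.
    ge() {
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 1
        done
        return 0    # all fields equal, so ">=" holds
    }
    ge 3.0.9 3.0.0 && echo "OpenSSL 3.0.9 satisfies the 3.0.0 FIPS floor"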
00:22:59.883 21:23:54 -- scripts/common.sh@341 -- # case "$op" in 00:22:59.883 21:23:54 -- scripts/common.sh@345 -- # : 1 00:22:59.883 21:23:54 -- scripts/common.sh@361 -- # (( v = 0 )) 00:22:59.883 21:23:54 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:59.883 21:23:54 -- scripts/common.sh@362 -- # decimal 3 00:22:59.883 21:23:54 -- scripts/common.sh@350 -- # local d=3 00:22:59.883 21:23:54 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:59.883 21:23:54 -- scripts/common.sh@352 -- # echo 3 00:22:59.883 21:23:54 -- scripts/common.sh@362 -- # ver1[v]=3 00:22:59.883 21:23:54 -- scripts/common.sh@363 -- # decimal 3 00:22:59.883 21:23:54 -- scripts/common.sh@350 -- # local d=3 00:22:59.883 21:23:54 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:59.883 21:23:54 -- scripts/common.sh@352 -- # echo 3 00:22:59.883 21:23:54 -- scripts/common.sh@363 -- # ver2[v]=3 00:22:59.883 21:23:54 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:59.883 21:23:54 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:22:59.883 21:23:54 -- scripts/common.sh@361 -- # (( v++ )) 00:22:59.883 21:23:54 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:59.883 21:23:54 -- scripts/common.sh@362 -- # decimal 0 00:22:59.883 21:23:54 -- scripts/common.sh@350 -- # local d=0 00:22:59.883 21:23:54 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:59.883 21:23:54 -- scripts/common.sh@352 -- # echo 0 00:22:59.883 21:23:54 -- scripts/common.sh@362 -- # ver1[v]=0 00:22:59.883 21:23:54 -- scripts/common.sh@363 -- # decimal 0 00:22:59.883 21:23:54 -- scripts/common.sh@350 -- # local d=0 00:22:59.883 21:23:54 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:59.883 21:23:54 -- scripts/common.sh@352 -- # echo 0 00:22:59.883 21:23:54 -- scripts/common.sh@363 -- # ver2[v]=0 00:22:59.883 21:23:54 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:59.883 21:23:54 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:22:59.884 21:23:54 -- scripts/common.sh@361 -- # (( v++ )) 00:22:59.884 21:23:54 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:59.884 21:23:54 -- scripts/common.sh@362 -- # decimal 9 00:22:59.884 21:23:54 -- scripts/common.sh@350 -- # local d=9 00:22:59.884 21:23:54 -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:22:59.884 21:23:54 -- scripts/common.sh@352 -- # echo 9 00:22:59.884 21:23:54 -- scripts/common.sh@362 -- # ver1[v]=9 00:22:59.884 21:23:54 -- scripts/common.sh@363 -- # decimal 0 00:22:59.884 21:23:54 -- scripts/common.sh@350 -- # local d=0 00:22:59.884 21:23:54 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:59.884 21:23:54 -- scripts/common.sh@352 -- # echo 0 00:22:59.884 21:23:54 -- scripts/common.sh@363 -- # ver2[v]=0 00:22:59.884 21:23:54 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:59.884 21:23:54 -- scripts/common.sh@364 -- # return 0 00:22:59.884 21:23:54 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:22:59.884 21:23:54 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:22:59.884 21:23:54 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:22:59.884 21:23:54 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:22:59.884 21:23:54 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:22:59.884 21:23:54 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:22:59.884 21:23:54 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:22:59.884 21:23:54 -- fips/fips.sh@113 -- # build_openssl_config 00:22:59.884 21:23:54 -- fips/fips.sh@37 -- # cat 00:22:59.884 21:23:54 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:22:59.884 21:23:54 -- fips/fips.sh@58 -- # cat - 00:22:59.884 21:23:54 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:22:59.884 21:23:54 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:22:59.884 21:23:54 -- fips/fips.sh@116 -- # mapfile -t providers 00:22:59.884 21:23:54 -- fips/fips.sh@116 -- # openssl list -providers 00:22:59.884 21:23:54 -- fips/fips.sh@116 -- # grep name 00:23:00.144 21:23:54 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:23:00.144 21:23:54 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:23:00.144 21:23:54 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:23:00.144 21:23:54 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:23:00.144 21:23:54 -- common/autotest_common.sh@638 -- # local es=0 00:23:00.144 21:23:54 -- common/autotest_common.sh@640 -- # valid_exec_arg openssl md5 /dev/fd/62 00:23:00.144 21:23:54 -- common/autotest_common.sh@626 -- # local arg=openssl 00:23:00.144 21:23:54 -- fips/fips.sh@127 -- # : 00:23:00.144 21:23:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:00.144 21:23:54 -- common/autotest_common.sh@630 -- # type -t openssl 00:23:00.144 21:23:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:00.144 21:23:54 -- common/autotest_common.sh@632 -- # type -P openssl 00:23:00.144 21:23:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:00.144 21:23:54 -- common/autotest_common.sh@632 -- # arg=/usr/bin/openssl 00:23:00.144 21:23:54 -- common/autotest_common.sh@632 -- # [[ -x /usr/bin/openssl ]] 00:23:00.144 21:23:54 -- common/autotest_common.sh@641 -- # openssl md5 /dev/fd/62 00:23:00.144 Error setting digest 00:23:00.144 00620041EA7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:23:00.144 00620041EA7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:23:00.144 21:23:54 -- common/autotest_common.sh@641 -- # es=1 00:23:00.144 21:23:54 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:23:00.144 21:23:54 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:23:00.144 21:23:54 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:23:00.144 21:23:54 -- fips/fips.sh@130 -- # nvmftestinit 00:23:00.144 21:23:54 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:00.144 21:23:54 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:00.144 21:23:54 -- nvmf/common.sh@437 -- # prepare_net_devs 
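The 'Error setting digest' lines above are the expected outcome, not a failure: with the Red Hat FIPS provider loaded, MD5 is not an approved algorithm, and fips.sh asserts that 'openssl md5' fails (the NOT wrapper turns the error into es=1, which counts as a pass). A hedged sketch of that negative test:

    # Under a FIPS-enabled OpenSSL, MD5 must be rejected; if it works,
    # the provider is not actually enforcing FIPS mode and we bail out.
    if echo -n sanity | openssl md5 >/dev/null 2>&1; then
        echo "FATAL: MD5 succeeded, so OpenSSL is not in FIPS mode" >&2
        exit 1
    fi
    echo "MD5 rejected as expected; continuing with FIPS-mode TLS tests"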
00:23:00.144 21:23:54 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:00.144 21:23:54 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:00.144 21:23:54 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:00.144 21:23:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:00.144 21:23:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:00.144 21:23:54 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:23:00.144 21:23:54 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:23:00.144 21:23:54 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:00.144 21:23:54 -- common/autotest_common.sh@10 -- # set +x 00:23:05.435 21:23:59 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:05.435 21:23:59 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:05.435 21:23:59 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:05.435 21:23:59 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:05.435 21:23:59 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:05.435 21:23:59 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:05.435 21:23:59 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:05.435 21:23:59 -- nvmf/common.sh@295 -- # net_devs=() 00:23:05.435 21:23:59 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:05.435 21:23:59 -- nvmf/common.sh@296 -- # e810=() 00:23:05.435 21:23:59 -- nvmf/common.sh@296 -- # local -ga e810 00:23:05.435 21:23:59 -- nvmf/common.sh@297 -- # x722=() 00:23:05.435 21:23:59 -- nvmf/common.sh@297 -- # local -ga x722 00:23:05.435 21:23:59 -- nvmf/common.sh@298 -- # mlx=() 00:23:05.435 21:23:59 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:05.435 21:23:59 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:05.435 21:23:59 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:05.435 21:23:59 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:05.435 21:23:59 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:05.435 21:23:59 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:05.435 21:23:59 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:05.435 21:23:59 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:05.435 21:23:59 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:05.435 21:23:59 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:05.435 21:23:59 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:05.435 21:23:59 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:05.435 21:23:59 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:05.435 21:23:59 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:05.435 21:23:59 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:23:05.435 21:23:59 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:23:05.435 21:23:59 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:23:05.435 21:23:59 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:05.435 21:23:59 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:05.435 21:23:59 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:23:05.435 Found 0000:27:00.0 (0x8086 - 0x159b) 00:23:05.435 21:23:59 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:05.435 21:23:59 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:05.435 21:23:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:05.435 21:23:59 -- nvmf/common.sh@351 -- 
# [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:05.435 21:23:59 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:05.435 21:23:59 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:05.435 21:23:59 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:23:05.435 Found 0000:27:00.1 (0x8086 - 0x159b) 00:23:05.435 21:23:59 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:05.435 21:23:59 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:05.435 21:23:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:05.435 21:23:59 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:05.435 21:23:59 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:05.435 21:23:59 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:05.435 21:23:59 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:23:05.435 21:23:59 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:05.435 21:23:59 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:05.435 21:23:59 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:05.435 21:23:59 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:05.435 21:23:59 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:23:05.435 Found net devices under 0000:27:00.0: cvl_0_0 00:23:05.435 21:23:59 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:05.435 21:23:59 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:05.435 21:23:59 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:05.435 21:23:59 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:05.435 21:23:59 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:05.435 21:23:59 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:23:05.435 Found net devices under 0000:27:00.1: cvl_0_1 00:23:05.435 21:23:59 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:05.435 21:23:59 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:23:05.435 21:23:59 -- nvmf/common.sh@403 -- # is_hw=yes 00:23:05.435 21:23:59 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:23:05.435 21:23:59 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:23:05.435 21:23:59 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:23:05.435 21:23:59 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:05.435 21:23:59 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:05.435 21:23:59 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:05.435 21:23:59 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:05.435 21:23:59 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:05.435 21:23:59 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:05.435 21:23:59 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:05.435 21:23:59 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:05.435 21:23:59 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:05.435 21:23:59 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:05.435 21:23:59 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:05.435 21:23:59 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:05.435 21:23:59 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:05.435 21:23:59 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:05.435 21:23:59 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:05.435 21:23:59 -- nvmf/common.sh@258 
-- # ip link set cvl_0_1 up 00:23:05.435 21:23:59 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:05.435 21:23:59 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:05.435 21:23:59 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:05.435 21:23:59 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:05.435 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:05.435 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.627 ms 00:23:05.435 00:23:05.435 --- 10.0.0.2 ping statistics --- 00:23:05.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:05.436 rtt min/avg/max/mdev = 0.627/0.627/0.627/0.000 ms 00:23:05.436 21:23:59 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:05.436 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:05.436 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.488 ms 00:23:05.436 00:23:05.436 --- 10.0.0.1 ping statistics --- 00:23:05.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:05.436 rtt min/avg/max/mdev = 0.488/0.488/0.488/0.000 ms 00:23:05.436 21:23:59 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:05.436 21:23:59 -- nvmf/common.sh@411 -- # return 0 00:23:05.436 21:23:59 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:05.436 21:23:59 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:05.436 21:23:59 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:05.436 21:23:59 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:05.436 21:23:59 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:05.436 21:23:59 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:05.436 21:23:59 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:05.697 21:23:59 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:23:05.697 21:23:59 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:05.697 21:23:59 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:05.697 21:23:59 -- common/autotest_common.sh@10 -- # set +x 00:23:05.697 21:23:59 -- nvmf/common.sh@470 -- # nvmfpid=1510470 00:23:05.697 21:23:59 -- nvmf/common.sh@471 -- # waitforlisten 1510470 00:23:05.697 21:23:59 -- common/autotest_common.sh@817 -- # '[' -z 1510470 ']' 00:23:05.697 21:23:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:05.697 21:23:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:05.697 21:23:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:05.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:05.697 21:23:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:05.697 21:23:59 -- common/autotest_common.sh@10 -- # set +x 00:23:05.697 21:23:59 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:05.697 [2024-04-23 21:23:59.836133] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 
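With the target app now starting inside its namespace (its EAL parameter line follows below), the nvmf_tcp_init plumbing traced above reduces to a handful of commands, all visible verbatim in the log: one NIC port is moved into a private namespace as the target side at 10.0.0.2, the other stays in the root namespace as the initiator at 10.0.0.1, and both directions are pinged before any NVMe/TCP traffic starts. Condensed for reference:

    # Target NIC port goes into its own namespace; initiator port stays put.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> root ns

Running target and initiator over a real link on one host is what lets the TLS handshake be exercised end to end in CI without a second machine.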
00:23:05.697 [2024-04-23 21:23:59.836246] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:05.697 EAL: No free 2048 kB hugepages reported on node 1 00:23:05.697 [2024-04-23 21:23:59.957314] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:05.959 [2024-04-23 21:24:00.068717] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:05.959 [2024-04-23 21:24:00.068760] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:05.959 [2024-04-23 21:24:00.068771] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:05.959 [2024-04-23 21:24:00.068782] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:05.959 [2024-04-23 21:24:00.068790] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:05.959 [2024-04-23 21:24:00.068819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:06.221 21:24:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:06.221 21:24:00 -- common/autotest_common.sh@850 -- # return 0 00:23:06.221 21:24:00 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:06.221 21:24:00 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:06.221 21:24:00 -- common/autotest_common.sh@10 -- # set +x 00:23:06.480 21:24:00 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:06.480 21:24:00 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:23:06.480 21:24:00 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:06.480 21:24:00 -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:06.480 21:24:00 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:06.480 21:24:00 -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:06.480 21:24:00 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:06.480 21:24:00 -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:06.480 21:24:00 -- fips/fips.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:23:06.480 [2024-04-23 21:24:00.637407] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:06.480 [2024-04-23 21:24:00.653350] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:06.480 [2024-04-23 21:24:00.653533] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:06.480 [2024-04-23 21:24:00.700691] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:06.480 malloc0 00:23:06.480 21:24:00 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:06.480 21:24:00 -- fips/fips.sh@147 -- # bdevperf_pid=1510827 00:23:06.480 21:24:00 -- fips/fips.sh@148 -- # waitforlisten 1510827 /var/tmp/bdevperf.sock 00:23:06.480 21:24:00 -- common/autotest_common.sh@817 -- # '[' -z 1510827 ']' 00:23:06.480 21:24:00 -- common/autotest_common.sh@821 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:23:06.480 21:24:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:06.480 21:24:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:06.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:06.480 21:24:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:06.480 21:24:00 -- common/autotest_common.sh@10 -- # set +x 00:23:06.480 21:24:00 -- fips/fips.sh@145 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:06.739 [2024-04-23 21:24:00.784881] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 00:23:06.739 [2024-04-23 21:24:00.784963] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1510827 ] 00:23:06.739 EAL: No free 2048 kB hugepages reported on node 1 00:23:06.739 [2024-04-23 21:24:00.868964] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:06.739 [2024-04-23 21:24:00.963727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:07.306 21:24:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:07.306 21:24:01 -- common/autotest_common.sh@850 -- # return 0 00:23:07.306 21:24:01 -- fips/fips.sh@150 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:07.566 [2024-04-23 21:24:01.606919] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:07.566 [2024-04-23 21:24:01.607034] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:07.566 TLSTESTn1 00:23:07.566 21:24:01 -- fips/fips.sh@154 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:07.566 Running I/O for 10 seconds... 
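While the 10-second verify workload runs over the TLS channel, it is worth pinning down the two RPCs that established it. The initiator-side attach is quoted from the trace above; the target-side add_host form is reconstructed from the 'nvmf_tcp_psk_path: deprecated feature PSK path' warning earlier and may differ in detail:

    # Target: allow host1 on cnode1 and bind the TLS PSK to that pairing
    # (reconstructed; the PSK-path form is the one flagged for removal in v24.09).
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk test/nvmf/fips/key.txt
    # Initiator: bdevperf dials in with the same PSK, as traced above.
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk test/nvmf/fips/key.txt

Both sides reference the same interchange-format key (NVMeTLSkey-1:01:..., chmod 0600 above), so the handshake derives identical session secrets without any certificate infrastructure.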
00:23:17.563 00:23:17.563 Latency(us) 00:23:17.563 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:17.563 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:17.563 Verification LBA range: start 0x0 length 0x2000 00:23:17.563 TLSTESTn1 : 10.03 3307.73 12.92 0.00 0.00 38623.33 6415.63 94923.72 00:23:17.563 =================================================================================================================== 00:23:17.563 Total : 3307.73 12.92 0.00 0.00 38623.33 6415.63 94923.72 00:23:17.563 0 00:23:17.563 21:24:11 -- fips/fips.sh@1 -- # cleanup 00:23:17.563 21:24:11 -- fips/fips.sh@15 -- # process_shm --id 0 00:23:17.563 21:24:11 -- common/autotest_common.sh@794 -- # type=--id 00:23:17.563 21:24:11 -- common/autotest_common.sh@795 -- # id=0 00:23:17.563 21:24:11 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:23:17.563 21:24:11 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:17.824 21:24:11 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:23:17.824 21:24:11 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:23:17.824 21:24:11 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:23:17.824 21:24:11 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:17.824 nvmf_trace.0 00:23:17.825 21:24:11 -- common/autotest_common.sh@809 -- # return 0 00:23:17.825 21:24:11 -- fips/fips.sh@16 -- # killprocess 1510827 00:23:17.825 21:24:11 -- common/autotest_common.sh@936 -- # '[' -z 1510827 ']' 00:23:17.825 21:24:11 -- common/autotest_common.sh@940 -- # kill -0 1510827 00:23:17.825 21:24:11 -- common/autotest_common.sh@941 -- # uname 00:23:17.825 21:24:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:17.825 21:24:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1510827 00:23:17.825 21:24:11 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:23:17.825 21:24:11 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:23:17.825 21:24:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1510827' 00:23:17.825 killing process with pid 1510827 00:23:17.825 21:24:11 -- common/autotest_common.sh@955 -- # kill 1510827 00:23:17.825 Received shutdown signal, test time was about 10.000000 seconds 00:23:17.825 00:23:17.825 Latency(us) 00:23:17.825 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:17.825 =================================================================================================================== 00:23:17.825 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:17.825 [2024-04-23 21:24:11.945028] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:17.825 21:24:11 -- common/autotest_common.sh@960 -- # wait 1510827 00:23:18.084 21:24:12 -- fips/fips.sh@17 -- # nvmftestfini 00:23:18.084 21:24:12 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:18.084 21:24:12 -- nvmf/common.sh@117 -- # sync 00:23:18.084 21:24:12 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:18.084 21:24:12 -- nvmf/common.sh@120 -- # set +e 00:23:18.084 21:24:12 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:18.084 21:24:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:18.084 rmmod nvme_tcp 00:23:18.084 rmmod nvme_fabrics 00:23:18.344 rmmod nvme_keyring 00:23:18.344 
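The rmmod cascade just above and the modprobe that follows below are iterations of the same nvmftestfini unload loop: kernel fabrics modules can stay referenced for a moment after the last controller detaches, so common.sh retries removal instead of failing on the first attempt. An approximate, hedged shape of that loop (the '{1..20}' retry count matches the trace; the sleep between attempts is assumed):

    set +e                       # an unload may legitimately fail and be retried
    for i in {1..20}; do
        modprobe -v -r nvme-tcp  # -v shows the cascade: nvme_tcp, then the
                                 # now-unused nvme_fabrics and nvme_keyring
        if modprobe -v -r nvme-fabrics; then
            break
        fi
        sleep 1
    done
    set -e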
21:24:12 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:18.344 21:24:12 -- nvmf/common.sh@124 -- # set -e 00:23:18.344 21:24:12 -- nvmf/common.sh@125 -- # return 0 00:23:18.344 21:24:12 -- nvmf/common.sh@478 -- # '[' -n 1510470 ']' 00:23:18.344 21:24:12 -- nvmf/common.sh@479 -- # killprocess 1510470 00:23:18.344 21:24:12 -- common/autotest_common.sh@936 -- # '[' -z 1510470 ']' 00:23:18.344 21:24:12 -- common/autotest_common.sh@940 -- # kill -0 1510470 00:23:18.344 21:24:12 -- common/autotest_common.sh@941 -- # uname 00:23:18.344 21:24:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:18.344 21:24:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1510470 00:23:18.344 21:24:12 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:18.344 21:24:12 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:18.344 21:24:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1510470' 00:23:18.344 killing process with pid 1510470 00:23:18.344 21:24:12 -- common/autotest_common.sh@955 -- # kill 1510470 00:23:18.344 [2024-04-23 21:24:12.440221] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:18.344 21:24:12 -- common/autotest_common.sh@960 -- # wait 1510470 00:23:18.913 21:24:12 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:18.913 21:24:12 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:18.913 21:24:12 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:18.913 21:24:12 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:18.913 21:24:12 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:18.913 21:24:12 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:18.913 21:24:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:18.913 21:24:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:20.822 21:24:15 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:20.822 21:24:15 -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:20.822 00:23:20.822 real 0m21.020s 00:23:20.822 user 0m23.410s 00:23:20.822 sys 0m8.044s 00:23:20.822 21:24:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:20.822 21:24:15 -- common/autotest_common.sh@10 -- # set +x 00:23:20.822 ************************************ 00:23:20.822 END TEST nvmf_fips 00:23:20.822 ************************************ 00:23:20.822 21:24:15 -- nvmf/nvmf.sh@64 -- # '[' 1 -eq 1 ']' 00:23:20.822 21:24:15 -- nvmf/nvmf.sh@65 -- # run_test nvmf_fuzz /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:23:20.822 21:24:15 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:20.823 21:24:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:20.823 21:24:15 -- common/autotest_common.sh@10 -- # set +x 00:23:21.083 ************************************ 00:23:21.083 START TEST nvmf_fuzz 00:23:21.083 ************************************ 00:23:21.083 21:24:15 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:23:21.083 * Looking for test storage... 
00:23:21.083 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:23:21.083 21:24:15 -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:23:21.083 21:24:15 -- nvmf/common.sh@7 -- # uname -s 00:23:21.083 21:24:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:21.083 21:24:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:21.083 21:24:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:21.083 21:24:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:21.083 21:24:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:21.083 21:24:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:21.083 21:24:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:21.083 21:24:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:21.083 21:24:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:21.083 21:24:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:21.083 21:24:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:23:21.083 21:24:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:23:21.083 21:24:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:21.083 21:24:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:21.083 21:24:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:23:21.083 21:24:15 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:21.083 21:24:15 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:23:21.083 21:24:15 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:21.083 21:24:15 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:21.083 21:24:15 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:21.083 21:24:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.083 21:24:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.083 21:24:15 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.083 21:24:15 -- paths/export.sh@5 -- # export PATH 00:23:21.083 21:24:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.083 21:24:15 -- nvmf/common.sh@47 -- # : 0 00:23:21.083 21:24:15 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:21.083 21:24:15 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:21.083 21:24:15 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:21.083 21:24:15 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:21.083 21:24:15 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:21.083 21:24:15 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:21.083 21:24:15 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:21.083 21:24:15 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:21.083 21:24:15 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:23:21.083 21:24:15 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:21.083 21:24:15 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:21.083 21:24:15 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:21.083 21:24:15 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:21.083 21:24:15 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:21.083 21:24:15 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:21.083 21:24:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:21.083 21:24:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:21.083 21:24:15 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:23:21.083 21:24:15 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:23:21.083 21:24:15 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:21.083 21:24:15 -- common/autotest_common.sh@10 -- # set +x 00:23:27.650 21:24:20 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:27.650 21:24:20 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:27.650 21:24:20 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:27.650 21:24:20 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:27.650 21:24:20 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:27.650 21:24:20 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:27.650 21:24:20 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:27.651 21:24:20 -- nvmf/common.sh@295 -- # net_devs=() 00:23:27.651 21:24:20 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:27.651 21:24:20 -- nvmf/common.sh@296 -- # e810=() 00:23:27.651 21:24:20 -- nvmf/common.sh@296 -- # local -ga e810 00:23:27.651 21:24:20 -- nvmf/common.sh@297 -- # 
x722=() 00:23:27.651 21:24:20 -- nvmf/common.sh@297 -- # local -ga x722 00:23:27.651 21:24:20 -- nvmf/common.sh@298 -- # mlx=() 00:23:27.651 21:24:20 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:27.651 21:24:20 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:27.651 21:24:20 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:27.651 21:24:20 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:27.651 21:24:20 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:27.651 21:24:20 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:27.651 21:24:20 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:27.651 21:24:20 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:27.651 21:24:20 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:27.651 21:24:20 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:27.651 21:24:20 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:27.651 21:24:20 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:27.651 21:24:20 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:27.651 21:24:20 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:27.651 21:24:20 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:23:27.651 21:24:20 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:23:27.651 21:24:20 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:23:27.651 21:24:20 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:27.651 21:24:20 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:27.651 21:24:20 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:23:27.651 Found 0000:27:00.0 (0x8086 - 0x159b) 00:23:27.651 21:24:20 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:27.651 21:24:20 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:27.651 21:24:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:27.651 21:24:20 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:27.651 21:24:20 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:27.651 21:24:20 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:27.651 21:24:20 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:23:27.651 Found 0000:27:00.1 (0x8086 - 0x159b) 00:23:27.651 21:24:20 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:27.651 21:24:20 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:27.651 21:24:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:27.651 21:24:20 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:27.651 21:24:20 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:27.651 21:24:20 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:27.651 21:24:20 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:23:27.651 21:24:20 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:27.651 21:24:20 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:27.651 21:24:20 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:27.651 21:24:20 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:27.651 21:24:20 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:23:27.651 Found net devices under 0000:27:00.0: cvl_0_0 00:23:27.651 21:24:20 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:27.651 21:24:20 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:23:27.651 21:24:20 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:27.651 21:24:20 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:27.651 21:24:20 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:27.651 21:24:20 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:23:27.651 Found net devices under 0000:27:00.1: cvl_0_1 00:23:27.651 21:24:20 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:27.651 21:24:20 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:23:27.651 21:24:20 -- nvmf/common.sh@403 -- # is_hw=yes 00:23:27.651 21:24:20 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:23:27.651 21:24:20 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:23:27.651 21:24:20 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:23:27.651 21:24:20 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:27.651 21:24:20 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:27.651 21:24:20 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:27.651 21:24:20 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:27.651 21:24:20 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:27.651 21:24:20 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:27.651 21:24:20 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:27.651 21:24:20 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:27.651 21:24:20 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:27.651 21:24:20 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:27.651 21:24:20 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:27.651 21:24:20 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:27.651 21:24:20 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:27.651 21:24:20 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:27.651 21:24:20 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:27.651 21:24:20 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:27.651 21:24:21 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:27.651 21:24:21 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:27.651 21:24:21 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:27.651 21:24:21 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:27.651 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:27.651 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.412 ms 00:23:27.651 00:23:27.651 --- 10.0.0.2 ping statistics --- 00:23:27.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:27.651 rtt min/avg/max/mdev = 0.412/0.412/0.412/0.000 ms 00:23:27.651 21:24:21 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:27.651 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:27.651 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.574 ms 00:23:27.651 00:23:27.651 --- 10.0.0.1 ping statistics --- 00:23:27.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:27.651 rtt min/avg/max/mdev = 0.574/0.574/0.574/0.000 ms 00:23:27.651 21:24:21 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:27.651 21:24:21 -- nvmf/common.sh@411 -- # return 0 00:23:27.651 21:24:21 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:27.651 21:24:21 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:27.651 21:24:21 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:27.651 21:24:21 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:27.651 21:24:21 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:27.651 21:24:21 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:27.651 21:24:21 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:27.651 21:24:21 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=1517586 00:23:27.651 21:24:21 -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:27.651 21:24:21 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:23:27.651 21:24:21 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 1517586 00:23:27.651 21:24:21 -- common/autotest_common.sh@817 -- # '[' -z 1517586 ']' 00:23:27.651 21:24:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:27.651 21:24:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:27.651 21:24:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:27.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
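The "Waiting for process..." line above comes from waitforlisten: the harness has just launched nvmf_tgt inside the cvl_0_0_ns_spdk namespace and now blocks until the target's RPC socket answers before issuing any RPCs. An illustrative reconstruction of that poll loop (the real helper in autotest_common.sh also retries an actual RPC over the socket; this sketch only checks process liveness and socket existence):

  # Hypothetical waitforlisten-style poll: succeed once the RPC socket exists,
  # fail fast if the target process dies first.
  waitforlisten_sketch() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
      for ((i = 0; i < 100; i++)); do
          kill -0 "$pid" 2>/dev/null || return 1   # target exited prematurely
          [[ -S $rpc_addr ]] && return 0           # socket is up; target is listening
          sleep 0.1
      done
      return 1                                     # timed out waiting for the socket
  }

Only after this returns 0 does the trace below proceed to the rpc_cmd calls that build the fuzz target.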
00:23:27.651 21:24:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:27.651 21:24:21 -- common/autotest_common.sh@10 -- # set +x 00:23:27.912 21:24:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:27.912 21:24:21 -- common/autotest_common.sh@850 -- # return 0 00:23:27.912 21:24:21 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:27.912 21:24:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:27.912 21:24:21 -- common/autotest_common.sh@10 -- # set +x 00:23:27.912 21:24:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:27.912 21:24:21 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:23:27.912 21:24:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:27.912 21:24:21 -- common/autotest_common.sh@10 -- # set +x 00:23:27.912 Malloc0 00:23:27.912 21:24:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:27.912 21:24:22 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:27.912 21:24:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:27.912 21:24:22 -- common/autotest_common.sh@10 -- # set +x 00:23:27.912 21:24:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:27.912 21:24:22 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:27.912 21:24:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:27.912 21:24:22 -- common/autotest_common.sh@10 -- # set +x 00:23:27.912 21:24:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:27.912 21:24:22 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:27.912 21:24:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:27.912 21:24:22 -- common/autotest_common.sh@10 -- # set +x 00:23:27.912 21:24:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:27.912 21:24:22 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:23:27.912 21:24:22 -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:24:00.039 Fuzzing completed. Shutting down the fuzz application 00:24:00.039 00:24:00.039 Dumping successful admin opcodes: 00:24:00.039 8, 9, 10, 24, 00:24:00.039 Dumping successful io opcodes: 00:24:00.039 0, 9, 00:24:00.039 NS: 0x200003aefec0 I/O qp, Total commands completed: 855884, total successful commands: 4977, random_seed: 2848128320 00:24:00.039 NS: 0x200003aefec0 admin qp, Total commands completed: 88989, total successful commands: 714, random_seed: 1773316608 00:24:00.039 21:24:52 -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/dsa-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:24:00.039 Fuzzing completed. 
Shutting down the fuzz application 00:24:00.039 00:24:00.039 Dumping successful admin opcodes: 00:24:00.039 24, 00:24:00.039 Dumping successful io opcodes: 00:24:00.039 00:24:00.039 NS: 0x200003aefec0 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 103229399 00:24:00.039 NS: 0x200003aefec0 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 103327791 00:24:00.039 21:24:54 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:00.039 21:24:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:00.039 21:24:54 -- common/autotest_common.sh@10 -- # set +x 00:24:00.039 21:24:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:00.039 21:24:54 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:24:00.039 21:24:54 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:24:00.039 21:24:54 -- nvmf/common.sh@477 -- # nvmfcleanup 00:24:00.040 21:24:54 -- nvmf/common.sh@117 -- # sync 00:24:00.040 21:24:54 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:00.040 21:24:54 -- nvmf/common.sh@120 -- # set +e 00:24:00.040 21:24:54 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:00.040 21:24:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:00.040 rmmod nvme_tcp 00:24:00.040 rmmod nvme_fabrics 00:24:00.040 rmmod nvme_keyring 00:24:00.040 21:24:54 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:00.040 21:24:54 -- nvmf/common.sh@124 -- # set -e 00:24:00.040 21:24:54 -- nvmf/common.sh@125 -- # return 0 00:24:00.040 21:24:54 -- nvmf/common.sh@478 -- # '[' -n 1517586 ']' 00:24:00.040 21:24:54 -- nvmf/common.sh@479 -- # killprocess 1517586 00:24:00.040 21:24:54 -- common/autotest_common.sh@936 -- # '[' -z 1517586 ']' 00:24:00.040 21:24:54 -- common/autotest_common.sh@940 -- # kill -0 1517586 00:24:00.040 21:24:54 -- common/autotest_common.sh@941 -- # uname 00:24:00.040 21:24:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:00.040 21:24:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1517586 00:24:00.040 21:24:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:00.040 21:24:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:00.040 21:24:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1517586' 00:24:00.040 killing process with pid 1517586 00:24:00.040 21:24:54 -- common/autotest_common.sh@955 -- # kill 1517586 00:24:00.040 21:24:54 -- common/autotest_common.sh@960 -- # wait 1517586 00:24:00.610 21:24:54 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:24:00.610 21:24:54 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:24:00.610 21:24:54 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:24:00.610 21:24:54 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:00.610 21:24:54 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:00.610 21:24:54 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:00.610 21:24:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:00.610 21:24:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:02.515 21:24:56 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:02.775 21:24:56 -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:24:02.775 00:24:02.775 real 0m41.684s 00:24:02.775 user 0m58.255s 00:24:02.775 sys 0m12.732s 
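With the fuzz test timed and done, it is worth condensing what it drove: the entire target-side setup earlier in this run was five RPCs against the freshly started nvmf_tgt. Replayed standalone they look as follows (the rpc_cmd lines in the trace go through the same RPC server; the scripts/rpc.py path shown is this workspace's checkout):

  # The five RPCs behind the fuzz target, taken from the rpc_cmd trace above:
  rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create -b Malloc0 64 512
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Both fuzz passes then targeted 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420': the first with 30 seconds of seeded random commands (-t 30 -S 123456 -N), the second replaying the canned commands from example.json (-j).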
00:24:02.775 21:24:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:02.775 21:24:56 -- common/autotest_common.sh@10 -- # set +x 00:24:02.775 ************************************ 00:24:02.775 END TEST nvmf_fuzz 00:24:02.775 ************************************ 00:24:02.775 21:24:56 -- nvmf/nvmf.sh@66 -- # run_test nvmf_multiconnection /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:02.775 21:24:56 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:02.775 21:24:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:02.775 21:24:56 -- common/autotest_common.sh@10 -- # set +x 00:24:02.775 ************************************ 00:24:02.775 START TEST nvmf_multiconnection 00:24:02.775 ************************************ 00:24:02.775 21:24:56 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:03.034 * Looking for test storage... 00:24:03.034 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:24:03.034 21:24:57 -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:24:03.034 21:24:57 -- nvmf/common.sh@7 -- # uname -s 00:24:03.034 21:24:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:03.034 21:24:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:03.034 21:24:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:03.034 21:24:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:03.034 21:24:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:03.034 21:24:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:03.034 21:24:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:03.034 21:24:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:03.034 21:24:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:03.034 21:24:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:03.034 21:24:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:24:03.034 21:24:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:24:03.034 21:24:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:03.034 21:24:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:03.034 21:24:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:24:03.034 21:24:57 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:03.034 21:24:57 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:24:03.034 21:24:57 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:03.034 21:24:57 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:03.034 21:24:57 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:03.034 21:24:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.034 21:24:57 -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.034 21:24:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.034 21:24:57 -- paths/export.sh@5 -- # export PATH 00:24:03.034 21:24:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.034 21:24:57 -- nvmf/common.sh@47 -- # : 0 00:24:03.034 21:24:57 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:03.034 21:24:57 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:03.034 21:24:57 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:03.034 21:24:57 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:03.034 21:24:57 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:03.034 21:24:57 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:03.034 21:24:57 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:03.034 21:24:57 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:03.034 21:24:57 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:03.034 21:24:57 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:03.034 21:24:57 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:24:03.034 21:24:57 -- target/multiconnection.sh@16 -- # nvmftestinit 00:24:03.034 21:24:57 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:24:03.034 21:24:57 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:03.034 21:24:57 -- nvmf/common.sh@437 -- # prepare_net_devs 00:24:03.034 21:24:57 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:24:03.034 21:24:57 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:24:03.034 21:24:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:03.034 21:24:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:03.034 21:24:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:03.034 21:24:57 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:24:03.034 21:24:57 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:24:03.034 21:24:57 -- nvmf/common.sh@285 -- # xtrace_disable 00:24:03.034 21:24:57 -- 
common/autotest_common.sh@10 -- # set +x 00:24:08.317 21:25:02 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:08.317 21:25:02 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:08.317 21:25:02 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:08.317 21:25:02 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:08.317 21:25:02 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:08.317 21:25:02 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:08.317 21:25:02 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:08.317 21:25:02 -- nvmf/common.sh@295 -- # net_devs=() 00:24:08.317 21:25:02 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:08.317 21:25:02 -- nvmf/common.sh@296 -- # e810=() 00:24:08.317 21:25:02 -- nvmf/common.sh@296 -- # local -ga e810 00:24:08.317 21:25:02 -- nvmf/common.sh@297 -- # x722=() 00:24:08.317 21:25:02 -- nvmf/common.sh@297 -- # local -ga x722 00:24:08.318 21:25:02 -- nvmf/common.sh@298 -- # mlx=() 00:24:08.318 21:25:02 -- nvmf/common.sh@298 -- # local -ga mlx 00:24:08.318 21:25:02 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:08.318 21:25:02 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:08.318 21:25:02 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:08.318 21:25:02 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:08.318 21:25:02 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:08.318 21:25:02 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:08.318 21:25:02 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:08.318 21:25:02 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:08.318 21:25:02 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:08.318 21:25:02 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:08.318 21:25:02 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:08.318 21:25:02 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:08.318 21:25:02 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:08.318 21:25:02 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:24:08.318 21:25:02 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:24:08.318 21:25:02 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:24:08.318 21:25:02 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:08.318 21:25:02 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:08.318 21:25:02 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:24:08.318 Found 0000:27:00.0 (0x8086 - 0x159b) 00:24:08.318 21:25:02 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:08.318 21:25:02 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:08.318 21:25:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:08.318 21:25:02 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:08.318 21:25:02 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:08.318 21:25:02 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:08.318 21:25:02 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:24:08.318 Found 0000:27:00.1 (0x8086 - 0x159b) 00:24:08.318 21:25:02 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:08.318 21:25:02 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:08.318 21:25:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:08.318 21:25:02 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:08.318 
21:25:02 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:08.318 21:25:02 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:08.318 21:25:02 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:24:08.318 21:25:02 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:08.318 21:25:02 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:08.318 21:25:02 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:08.318 21:25:02 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:08.318 21:25:02 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:24:08.318 Found net devices under 0000:27:00.0: cvl_0_0 00:24:08.318 21:25:02 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:08.318 21:25:02 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:08.318 21:25:02 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:08.318 21:25:02 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:08.318 21:25:02 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:08.318 21:25:02 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:24:08.318 Found net devices under 0000:27:00.1: cvl_0_1 00:24:08.318 21:25:02 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:08.318 21:25:02 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:24:08.318 21:25:02 -- nvmf/common.sh@403 -- # is_hw=yes 00:24:08.318 21:25:02 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:24:08.318 21:25:02 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:24:08.318 21:25:02 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:24:08.318 21:25:02 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:08.318 21:25:02 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:08.318 21:25:02 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:08.318 21:25:02 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:08.318 21:25:02 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:08.318 21:25:02 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:08.318 21:25:02 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:08.318 21:25:02 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:08.318 21:25:02 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:08.318 21:25:02 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:08.318 21:25:02 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:08.318 21:25:02 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:08.318 21:25:02 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:08.318 21:25:02 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:08.318 21:25:02 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:08.318 21:25:02 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:08.318 21:25:02 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:08.578 21:25:02 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:08.578 21:25:02 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:08.578 21:25:02 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:08.578 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:08.578 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.376 ms 00:24:08.578 00:24:08.578 --- 10.0.0.2 ping statistics --- 00:24:08.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:08.578 rtt min/avg/max/mdev = 0.376/0.376/0.376/0.000 ms 00:24:08.578 21:25:02 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:08.578 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:08.578 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.562 ms 00:24:08.578 00:24:08.578 --- 10.0.0.1 ping statistics --- 00:24:08.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:08.578 rtt min/avg/max/mdev = 0.562/0.562/0.562/0.000 ms 00:24:08.578 21:25:02 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:08.578 21:25:02 -- nvmf/common.sh@411 -- # return 0 00:24:08.578 21:25:02 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:24:08.578 21:25:02 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:08.578 21:25:02 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:24:08.578 21:25:02 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:24:08.578 21:25:02 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:08.578 21:25:02 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:24:08.578 21:25:02 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:24:08.578 21:25:02 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:24:08.578 21:25:02 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:08.578 21:25:02 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:08.578 21:25:02 -- common/autotest_common.sh@10 -- # set +x 00:24:08.578 21:25:02 -- nvmf/common.sh@470 -- # nvmfpid=1527934 00:24:08.578 21:25:02 -- nvmf/common.sh@471 -- # waitforlisten 1527934 00:24:08.578 21:25:02 -- common/autotest_common.sh@817 -- # '[' -z 1527934 ']' 00:24:08.578 21:25:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:08.578 21:25:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:08.578 21:25:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:08.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:08.578 21:25:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:08.578 21:25:02 -- common/autotest_common.sh@10 -- # set +x 00:24:08.578 21:25:02 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:08.578 [2024-04-23 21:25:02.806856] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 00:24:08.578 [2024-04-23 21:25:02.806990] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:08.837 EAL: No free 2048 kB hugepages reported on node 1 00:24:08.838 [2024-04-23 21:25:02.947693] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:08.838 [2024-04-23 21:25:03.044402] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:08.838 [2024-04-23 21:25:03.044451] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:08.838 [2024-04-23 21:25:03.044468] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:08.838 [2024-04-23 21:25:03.044477] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:08.838 [2024-04-23 21:25:03.044485] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:08.838 [2024-04-23 21:25:03.044573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:08.838 [2024-04-23 21:25:03.044669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:08.838 [2024-04-23 21:25:03.044747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:08.838 [2024-04-23 21:25:03.044758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:09.404 21:25:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:09.404 21:25:03 -- common/autotest_common.sh@850 -- # return 0 00:24:09.404 21:25:03 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:09.404 21:25:03 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:09.404 21:25:03 -- common/autotest_common.sh@10 -- # set +x 00:24:09.404 21:25:03 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:09.404 21:25:03 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:09.404 21:25:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.404 21:25:03 -- common/autotest_common.sh@10 -- # set +x 00:24:09.404 [2024-04-23 21:25:03.553403] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:09.404 21:25:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.404 21:25:03 -- target/multiconnection.sh@21 -- # seq 1 11 00:24:09.404 21:25:03 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:09.404 21:25:03 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:09.404 21:25:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.404 21:25:03 -- common/autotest_common.sh@10 -- # set +x 00:24:09.404 Malloc1 00:24:09.405 21:25:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.405 21:25:03 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:24:09.405 21:25:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.405 21:25:03 -- common/autotest_common.sh@10 -- # set +x 00:24:09.405 21:25:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.405 21:25:03 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:09.405 21:25:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.405 21:25:03 -- common/autotest_common.sh@10 -- # set +x 00:24:09.405 21:25:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.405 21:25:03 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:09.405 21:25:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.405 21:25:03 -- common/autotest_common.sh@10 -- # set +x 00:24:09.405 [2024-04-23 21:25:03.628201] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:09.405 21:25:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.405 21:25:03 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:09.405 21:25:03 -- 
target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:24:09.405 21:25:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.405 21:25:03 -- common/autotest_common.sh@10 -- # set +x 00:24:09.405 Malloc2 00:24:09.405 21:25:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.405 21:25:03 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:24:09.405 21:25:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.405 21:25:03 -- common/autotest_common.sh@10 -- # set +x 00:24:09.405 21:25:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.405 21:25:03 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:24:09.405 21:25:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.405 21:25:03 -- common/autotest_common.sh@10 -- # set +x 00:24:09.665 21:25:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.665 21:25:03 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:09.665 21:25:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.665 21:25:03 -- common/autotest_common.sh@10 -- # set +x 00:24:09.665 21:25:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.665 21:25:03 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:09.665 21:25:03 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:24:09.665 21:25:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.665 21:25:03 -- common/autotest_common.sh@10 -- # set +x 00:24:09.665 Malloc3 00:24:09.665 21:25:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.665 21:25:03 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:24:09.665 21:25:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.665 21:25:03 -- common/autotest_common.sh@10 -- # set +x 00:24:09.665 21:25:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.665 21:25:03 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:24:09.665 21:25:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.665 21:25:03 -- common/autotest_common.sh@10 -- # set +x 00:24:09.665 21:25:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.665 21:25:03 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:24:09.665 21:25:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.665 21:25:03 -- common/autotest_common.sh@10 -- # set +x 00:24:09.665 21:25:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.665 21:25:03 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:09.665 21:25:03 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:24:09.665 21:25:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.665 21:25:03 -- common/autotest_common.sh@10 -- # set +x 00:24:09.665 Malloc4 00:24:09.665 21:25:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.665 21:25:03 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:24:09.665 21:25:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.665 21:25:03 -- common/autotest_common.sh@10 -- # set +x 00:24:09.665 21:25:03 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.665 21:25:03 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:24:09.665 21:25:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.665 21:25:03 -- common/autotest_common.sh@10 -- # set +x 00:24:09.665 21:25:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.665 21:25:03 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:24:09.665 21:25:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.665 21:25:03 -- common/autotest_common.sh@10 -- # set +x 00:24:09.665 21:25:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.665 21:25:03 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:09.665 21:25:03 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:24:09.665 21:25:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.665 21:25:03 -- common/autotest_common.sh@10 -- # set +x 00:24:09.665 Malloc5 00:24:09.665 21:25:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.665 21:25:03 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:24:09.665 21:25:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.665 21:25:03 -- common/autotest_common.sh@10 -- # set +x 00:24:09.665 21:25:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.665 21:25:03 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:24:09.665 21:25:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.665 21:25:03 -- common/autotest_common.sh@10 -- # set +x 00:24:09.665 21:25:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.665 21:25:03 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:24:09.665 21:25:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.665 21:25:03 -- common/autotest_common.sh@10 -- # set +x 00:24:09.665 21:25:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.665 21:25:03 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:09.665 21:25:03 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:24:09.665 21:25:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.665 21:25:03 -- common/autotest_common.sh@10 -- # set +x 00:24:09.665 Malloc6 00:24:09.665 21:25:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.665 21:25:03 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:24:09.665 21:25:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.665 21:25:03 -- common/autotest_common.sh@10 -- # set +x 00:24:09.666 21:25:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.666 21:25:03 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:24:09.666 21:25:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.666 21:25:03 -- common/autotest_common.sh@10 -- # set +x 00:24:09.666 21:25:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.666 21:25:03 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:24:09.666 21:25:03 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:24:09.666 21:25:03 -- common/autotest_common.sh@10 -- # set +x 00:24:09.666 21:25:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.666 21:25:03 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:09.666 21:25:03 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:24:09.666 21:25:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.666 21:25:03 -- common/autotest_common.sh@10 -- # set +x 00:24:09.926 Malloc7 00:24:09.926 21:25:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.926 21:25:03 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:24:09.926 21:25:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.926 21:25:03 -- common/autotest_common.sh@10 -- # set +x 00:24:09.926 21:25:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.926 21:25:03 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:24:09.926 21:25:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.926 21:25:03 -- common/autotest_common.sh@10 -- # set +x 00:24:09.926 21:25:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.926 21:25:03 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:24:09.926 21:25:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.926 21:25:03 -- common/autotest_common.sh@10 -- # set +x 00:24:09.926 21:25:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.926 21:25:03 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:09.926 21:25:03 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:24:09.926 21:25:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.926 21:25:03 -- common/autotest_common.sh@10 -- # set +x 00:24:09.926 Malloc8 00:24:09.926 21:25:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.926 21:25:04 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:24:09.926 21:25:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.926 21:25:04 -- common/autotest_common.sh@10 -- # set +x 00:24:09.926 21:25:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.926 21:25:04 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:24:09.926 21:25:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.926 21:25:04 -- common/autotest_common.sh@10 -- # set +x 00:24:09.926 21:25:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.926 21:25:04 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:24:09.926 21:25:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.926 21:25:04 -- common/autotest_common.sh@10 -- # set +x 00:24:09.926 21:25:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.926 21:25:04 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:09.926 21:25:04 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:24:09.926 21:25:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.926 21:25:04 -- common/autotest_common.sh@10 -- # set +x 00:24:09.926 Malloc9 00:24:09.926 21:25:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.926 21:25:04 -- 
target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:24:09.926 21:25:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.926 21:25:04 -- common/autotest_common.sh@10 -- # set +x 00:24:09.926 21:25:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.926 21:25:04 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:24:09.926 21:25:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.926 21:25:04 -- common/autotest_common.sh@10 -- # set +x 00:24:09.926 21:25:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.926 21:25:04 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:24:09.926 21:25:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.926 21:25:04 -- common/autotest_common.sh@10 -- # set +x 00:24:09.926 21:25:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.926 21:25:04 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:09.926 21:25:04 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:24:09.926 21:25:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.926 21:25:04 -- common/autotest_common.sh@10 -- # set +x 00:24:09.926 Malloc10 00:24:09.926 21:25:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.926 21:25:04 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:24:09.926 21:25:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.926 21:25:04 -- common/autotest_common.sh@10 -- # set +x 00:24:09.926 21:25:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.926 21:25:04 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:24:09.926 21:25:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.926 21:25:04 -- common/autotest_common.sh@10 -- # set +x 00:24:09.926 21:25:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.926 21:25:04 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:24:09.926 21:25:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.926 21:25:04 -- common/autotest_common.sh@10 -- # set +x 00:24:09.926 21:25:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.926 21:25:04 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:09.926 21:25:04 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:24:09.926 21:25:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.926 21:25:04 -- common/autotest_common.sh@10 -- # set +x 00:24:09.926 Malloc11 00:24:09.926 21:25:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.926 21:25:04 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:24:09.926 21:25:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.926 21:25:04 -- common/autotest_common.sh@10 -- # set +x 00:24:09.926 21:25:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.926 21:25:04 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:24:09.926 21:25:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.926 21:25:04 -- common/autotest_common.sh@10 -- # set +x 00:24:10.186 
21:25:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:10.186 21:25:04 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:24:10.186 21:25:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:10.186 21:25:04 -- common/autotest_common.sh@10 -- # set +x 00:24:10.186 21:25:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:10.186 21:25:04 -- target/multiconnection.sh@28 -- # seq 1 11 00:24:10.186 21:25:04 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:10.186 21:25:04 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:24:11.563 21:25:05 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:24:11.563 21:25:05 -- common/autotest_common.sh@1184 -- # local i=0 00:24:11.563 21:25:05 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:24:11.563 21:25:05 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:24:11.563 21:25:05 -- common/autotest_common.sh@1191 -- # sleep 2 00:24:13.467 21:25:07 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:24:13.467 21:25:07 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:24:13.467 21:25:07 -- common/autotest_common.sh@1193 -- # grep -c SPDK1 00:24:13.726 21:25:07 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:24:13.726 21:25:07 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:24:13.726 21:25:07 -- common/autotest_common.sh@1194 -- # return 0 00:24:13.726 21:25:07 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:13.726 21:25:07 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:24:15.104 21:25:09 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:24:15.104 21:25:09 -- common/autotest_common.sh@1184 -- # local i=0 00:24:15.104 21:25:09 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:24:15.104 21:25:09 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:24:15.104 21:25:09 -- common/autotest_common.sh@1191 -- # sleep 2 00:24:17.009 21:25:11 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:24:17.009 21:25:11 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:24:17.009 21:25:11 -- common/autotest_common.sh@1193 -- # grep -c SPDK2 00:24:17.009 21:25:11 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:24:17.009 21:25:11 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:24:17.009 21:25:11 -- common/autotest_common.sh@1194 -- # return 0 00:24:17.009 21:25:11 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:17.009 21:25:11 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:24:18.915 21:25:12 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:24:18.915 21:25:12 -- common/autotest_common.sh@1184 -- # local i=0 00:24:18.915 21:25:12 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:24:18.916 
21:25:12 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:24:18.916 21:25:12 -- common/autotest_common.sh@1191 -- # sleep 2 00:24:20.825 21:25:14 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:24:20.825 21:25:14 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:24:20.825 21:25:14 -- common/autotest_common.sh@1193 -- # grep -c SPDK3 00:24:20.825 21:25:14 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:24:20.825 21:25:14 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:24:20.825 21:25:14 -- common/autotest_common.sh@1194 -- # return 0 00:24:20.825 21:25:14 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:20.825 21:25:14 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:24:22.205 21:25:16 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:24:22.205 21:25:16 -- common/autotest_common.sh@1184 -- # local i=0 00:24:22.205 21:25:16 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:24:22.205 21:25:16 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:24:22.205 21:25:16 -- common/autotest_common.sh@1191 -- # sleep 2 00:24:24.112 21:25:18 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:24:24.112 21:25:18 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:24:24.112 21:25:18 -- common/autotest_common.sh@1193 -- # grep -c SPDK4 00:24:24.112 21:25:18 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:24:24.112 21:25:18 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:24:24.112 21:25:18 -- common/autotest_common.sh@1194 -- # return 0 00:24:24.112 21:25:18 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:24.112 21:25:18 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:24:26.020 21:25:19 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:24:26.020 21:25:19 -- common/autotest_common.sh@1184 -- # local i=0 00:24:26.020 21:25:19 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:24:26.020 21:25:19 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:24:26.020 21:25:19 -- common/autotest_common.sh@1191 -- # sleep 2 00:24:27.930 21:25:21 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:24:27.930 21:25:21 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:24:27.930 21:25:21 -- common/autotest_common.sh@1193 -- # grep -c SPDK5 00:24:27.930 21:25:21 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:24:27.930 21:25:21 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:24:27.930 21:25:21 -- common/autotest_common.sh@1194 -- # return 0 00:24:27.930 21:25:21 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:27.930 21:25:21 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:24:29.314 21:25:23 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:24:29.314 21:25:23 -- common/autotest_common.sh@1184 -- # 
local i=0 00:24:29.314 21:25:23 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:24:29.314 21:25:23 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:24:29.314 21:25:23 -- common/autotest_common.sh@1191 -- # sleep 2 00:24:31.220 21:25:25 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:24:31.220 21:25:25 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:24:31.220 21:25:25 -- common/autotest_common.sh@1193 -- # grep -c SPDK6 00:24:31.480 21:25:25 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:24:31.480 21:25:25 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:24:31.480 21:25:25 -- common/autotest_common.sh@1194 -- # return 0 00:24:31.480 21:25:25 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:31.480 21:25:25 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:24:33.388 21:25:27 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:24:33.388 21:25:27 -- common/autotest_common.sh@1184 -- # local i=0 00:24:33.388 21:25:27 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:24:33.388 21:25:27 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:24:33.388 21:25:27 -- common/autotest_common.sh@1191 -- # sleep 2 00:24:35.298 21:25:29 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:24:35.298 21:25:29 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:24:35.298 21:25:29 -- common/autotest_common.sh@1193 -- # grep -c SPDK7 00:24:35.298 21:25:29 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:24:35.298 21:25:29 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:24:35.298 21:25:29 -- common/autotest_common.sh@1194 -- # return 0 00:24:35.298 21:25:29 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:35.298 21:25:29 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:24:36.771 21:25:30 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:24:36.771 21:25:30 -- common/autotest_common.sh@1184 -- # local i=0 00:24:36.771 21:25:30 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:24:36.771 21:25:30 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:24:36.771 21:25:30 -- common/autotest_common.sh@1191 -- # sleep 2 00:24:39.307 21:25:32 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:24:39.307 21:25:32 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:24:39.307 21:25:32 -- common/autotest_common.sh@1193 -- # grep -c SPDK8 00:24:39.307 21:25:32 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:24:39.307 21:25:32 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:24:39.307 21:25:32 -- common/autotest_common.sh@1194 -- # return 0 00:24:39.307 21:25:32 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:39.307 21:25:32 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:24:40.682 
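Each connect traced in this stretch (target/multiconnection.sh lines 28-30) pairs an nvme-cli connect with a poll that waits for the new namespace to surface as a block device. A rough reconstruction follows; the hostnqn/hostid values are the host UUID shown in the trace, and the body of waitforserial is paraphrased from the common/autotest_common.sh xtrace (it polls lsblk about every 2 seconds for up to 16 attempts; the exact line structure is an assumption):

    # paraphrase of the traced helper: succeed once lsblk reports a block
    # device whose SERIAL column matches the requested serial number
    waitforserial() {
        local i=0
        local nvme_device_counter=1 nvme_devices=0
        while (( i++ <= 15 )); do
            sleep 2
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$1")
            (( nvme_devices == nvme_device_counter )) && return 0
        done
        return 1
    }

    for i in $(seq 1 $NVMF_SUBSYS); do
        nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 \
            --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 \
            -t tcp -n "nqn.2016-06.io.spdk:cnode$i" -a 10.0.0.2 -s 4420
        waitforserial "SPDK$i"
    done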
21:25:34 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:24:40.682 21:25:34 -- common/autotest_common.sh@1184 -- # local i=0 00:24:40.682 21:25:34 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:24:40.682 21:25:34 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:24:40.682 21:25:34 -- common/autotest_common.sh@1191 -- # sleep 2 00:24:43.224 21:25:36 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:24:43.224 21:25:36 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:24:43.224 21:25:36 -- common/autotest_common.sh@1193 -- # grep -c SPDK9 00:24:43.224 21:25:36 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:24:43.224 21:25:36 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:24:43.224 21:25:36 -- common/autotest_common.sh@1194 -- # return 0 00:24:43.224 21:25:36 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:43.224 21:25:36 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:24:44.599 21:25:38 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:24:44.599 21:25:38 -- common/autotest_common.sh@1184 -- # local i=0 00:24:44.599 21:25:38 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:24:44.599 21:25:38 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:24:44.599 21:25:38 -- common/autotest_common.sh@1191 -- # sleep 2 00:24:46.506 21:25:40 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:24:46.506 21:25:40 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:24:46.506 21:25:40 -- common/autotest_common.sh@1193 -- # grep -c SPDK10 00:24:46.506 21:25:40 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:24:46.506 21:25:40 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:24:46.506 21:25:40 -- common/autotest_common.sh@1194 -- # return 0 00:24:46.506 21:25:40 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:46.506 21:25:40 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:24:48.414 21:25:42 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:24:48.414 21:25:42 -- common/autotest_common.sh@1184 -- # local i=0 00:24:48.414 21:25:42 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:24:48.414 21:25:42 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:24:48.414 21:25:42 -- common/autotest_common.sh@1191 -- # sleep 2 00:24:50.952 21:25:44 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:24:50.952 21:25:44 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:24:50.952 21:25:44 -- common/autotest_common.sh@1193 -- # grep -c SPDK11 00:24:50.952 21:25:44 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:24:50.952 21:25:44 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:24:50.952 21:25:44 -- common/autotest_common.sh@1194 -- # return 0 00:24:50.952 21:25:44 -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:24:50.952 [global] 00:24:50.952 thread=1 00:24:50.952 
invalidate=1
00:24:50.952 rw=read
00:24:50.952 time_based=1
00:24:50.952 runtime=10
00:24:50.952 ioengine=libaio
00:24:50.952 direct=1
00:24:50.952 bs=262144
00:24:50.952 iodepth=64
00:24:50.952 norandommap=1
00:24:50.952 numjobs=1
00:24:50.952
00:24:50.952 [job0]
00:24:50.952 filename=/dev/nvme0n1
00:24:50.952 [job1]
00:24:50.952 filename=/dev/nvme10n1
00:24:50.952 [job2]
00:24:50.952 filename=/dev/nvme1n1
00:24:50.952 [job3]
00:24:50.952 filename=/dev/nvme2n1
00:24:50.952 [job4]
00:24:50.952 filename=/dev/nvme3n1
00:24:50.952 [job5]
00:24:50.952 filename=/dev/nvme4n1
00:24:50.952 [job6]
00:24:50.952 filename=/dev/nvme5n1
00:24:50.952 [job7]
00:24:50.952 filename=/dev/nvme6n1
00:24:50.952 [job8]
00:24:50.952 filename=/dev/nvme7n1
00:24:50.952 [job9]
00:24:50.952 filename=/dev/nvme8n1
00:24:50.952 [job10]
00:24:50.952 filename=/dev/nvme9n1
00:24:50.952 Could not set queue depth (nvme0n1)
00:24:50.952 Could not set queue depth (nvme10n1)
00:24:50.952 Could not set queue depth (nvme1n1)
00:24:50.952 Could not set queue depth (nvme2n1)
00:24:50.952 Could not set queue depth (nvme3n1)
00:24:50.952 Could not set queue depth (nvme4n1)
00:24:50.952 Could not set queue depth (nvme5n1)
00:24:50.952 Could not set queue depth (nvme6n1)
00:24:50.953 Could not set queue depth (nvme7n1)
00:24:50.953 Could not set queue depth (nvme8n1)
00:24:50.953 Could not set queue depth (nvme9n1)
00:24:51.212 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:24:51.212 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:24:51.212 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:24:51.212 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:24:51.212 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:24:51.212 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:24:51.212 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:24:51.212 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:24:51.212 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:24:51.212 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:24:51.212 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:24:51.212 fio-3.35
00:24:51.212 Starting 11 threads
00:25:03.418
00:25:03.418 job0: (groupid=0, jobs=1): err= 0: pid=1536368: Tue Apr 23 21:25:55 2024
00:25:03.418 read: IOPS=434, BW=109MiB/s (114MB/s)(1095MiB/10086msec)
00:25:03.418 slat (usec): min=12, max=95488, avg=2281.07, stdev=6540.36
00:25:03.418 clat (msec): min=31, max=310, avg=144.96, stdev=46.24
00:25:03.418 lat (msec): min=31, max=310, avg=147.24, stdev=47.25
00:25:03.418 clat percentiles (msec):
00:25:03.418 | 1.00th=[ 74], 5.00th=[ 83], 10.00th=[ 88], 20.00th=[ 102],
00:25:03.418 | 30.00th=[ 115], 40.00th=[ 127], 50.00th=[ 138], 60.00th=[ 150],
00:25:03.418 | 70.00th=[ 171], 80.00th=[ 188], 90.00th=[ 213], 95.00th=[ 224],
00:25:03.418 | 99.00th=[ 268], 99.50th=[ 271], 99.90th=[ 288], 99.95th=[ 305],
00:25:03.418 | 99.99th=[ 309]
00:25:03.418 bw ( KiB/s): min=62976, max=159232, per=5.74%, avg=110504.85, stdev=30801.00, samples=20 00:25:03.418 iops : min= 246, max= 622, avg=431.65, stdev=120.32, samples=20 00:25:03.418 lat (msec) : 50=0.27%, 100=18.12%, 250=80.00%, 500=1.60% 00:25:03.418 cpu : usr=0.13%, sys=1.43%, ctx=808, majf=0, minf=4097 00:25:03.418 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:25:03.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:03.418 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:03.418 issued rwts: total=4381,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:03.418 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:03.418 job1: (groupid=0, jobs=1): err= 0: pid=1536369: Tue Apr 23 21:25:55 2024 00:25:03.418 read: IOPS=1144, BW=286MiB/s (300MB/s)(2870MiB/10031msec) 00:25:03.418 slat (usec): min=7, max=141234, avg=851.16, stdev=2635.11 00:25:03.418 clat (msec): min=5, max=231, avg=55.04, stdev=23.68 00:25:03.418 lat (msec): min=5, max=231, avg=55.89, stdev=23.95 00:25:03.418 clat percentiles (msec): 00:25:03.418 | 1.00th=[ 18], 5.00th=[ 29], 10.00th=[ 31], 20.00th=[ 32], 00:25:03.418 | 30.00th=[ 36], 40.00th=[ 50], 50.00th=[ 56], 60.00th=[ 63], 00:25:03.418 | 70.00th=[ 68], 80.00th=[ 73], 90.00th=[ 80], 95.00th=[ 87], 00:25:03.418 | 99.00th=[ 111], 99.50th=[ 192], 99.90th=[ 220], 99.95th=[ 220], 00:25:03.418 | 99.99th=[ 232] 00:25:03.418 bw ( KiB/s): min=156472, max=519680, per=15.17%, avg=292154.30, stdev=96758.54, samples=20 00:25:03.418 iops : min= 611, max= 2030, avg=1141.15, stdev=377.97, samples=20 00:25:03.418 lat (msec) : 10=0.06%, 20=2.47%, 50=38.80%, 100=57.28%, 250=1.39% 00:25:03.418 cpu : usr=0.12%, sys=2.50%, ctx=2332, majf=0, minf=4097 00:25:03.418 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:25:03.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:03.418 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:03.418 issued rwts: total=11478,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:03.418 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:03.418 job2: (groupid=0, jobs=1): err= 0: pid=1536370: Tue Apr 23 21:25:55 2024 00:25:03.418 read: IOPS=722, BW=181MiB/s (189MB/s)(1820MiB/10082msec) 00:25:03.418 slat (usec): min=5, max=110223, avg=920.06, stdev=3978.15 00:25:03.418 clat (msec): min=2, max=305, avg=87.64, stdev=52.20 00:25:03.418 lat (msec): min=2, max=323, avg=88.56, stdev=52.80 00:25:03.418 clat percentiles (msec): 00:25:03.418 | 1.00th=[ 9], 5.00th=[ 19], 10.00th=[ 30], 20.00th=[ 36], 00:25:03.418 | 30.00th=[ 51], 40.00th=[ 71], 50.00th=[ 86], 60.00th=[ 97], 00:25:03.418 | 70.00th=[ 110], 80.00th=[ 126], 90.00th=[ 153], 95.00th=[ 192], 00:25:03.418 | 99.00th=[ 241], 99.50th=[ 259], 99.90th=[ 271], 99.95th=[ 275], 00:25:03.418 | 99.99th=[ 305] 00:25:03.418 bw ( KiB/s): min=77312, max=431616, per=9.59%, avg=184738.70, stdev=88478.93, samples=20 00:25:03.418 iops : min= 302, max= 1686, avg=721.50, stdev=345.67, samples=20 00:25:03.418 lat (msec) : 4=0.11%, 10=1.25%, 20=4.38%, 50=23.97%, 100=33.24% 00:25:03.418 lat (msec) : 250=36.30%, 500=0.76% 00:25:03.418 cpu : usr=0.17%, sys=1.85%, ctx=1676, majf=0, minf=4097 00:25:03.418 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:25:03.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:03.418 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 
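The fio-wrapper flags visible above (-i 262144 -d 64 -t read -r 10) map directly onto the generated job file: -i becomes bs=262144, -d becomes iodepth=64, -t becomes rw=read, and -r becomes runtime=10, with one [jobN] stanza per connected namespace. A hypothetical standalone fio command for a single device with the same parameters (a sketch for orientation, not the wrapper itself), plus a quick consistency check against job0's summary:

    # single-job equivalent of the generated job file shown above
    fio --name=job0 --filename=/dev/nvme0n1 --rw=read --bs=262144 \
        --iodepth=64 --ioengine=libaio --direct=1 --thread --invalidate=1 \
        --norandommap --numjobs=1 --time_based --runtime=10

    # job0 above: avg bandwidth 110504.85 KiB/s at a 256 KiB block size
    echo 'scale=2; 110504.85/256' | bc    # 431.65, matching "iops : ... avg=431.65"

The same wrapper runs again further down with -t randwrite for the write pass, which is why the whole job file and result block repeat with rw=randwrite and "0 zone resets" in each job header.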
00:25:03.418 issued rwts: total=7281,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:03.418 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:03.418 job3: (groupid=0, jobs=1): err= 0: pid=1536371: Tue Apr 23 21:25:55 2024 00:25:03.418 read: IOPS=564, BW=141MiB/s (148MB/s)(1425MiB/10089msec) 00:25:03.418 slat (usec): min=7, max=162621, avg=1181.46, stdev=7140.00 00:25:03.418 clat (usec): min=975, max=388030, avg=112013.38, stdev=73157.24 00:25:03.418 lat (usec): min=994, max=420673, avg=113194.84, stdev=74356.18 00:25:03.418 clat percentiles (msec): 00:25:03.418 | 1.00th=[ 6], 5.00th=[ 14], 10.00th=[ 21], 20.00th=[ 37], 00:25:03.418 | 30.00th=[ 58], 40.00th=[ 71], 50.00th=[ 112], 60.00th=[ 136], 00:25:03.418 | 70.00th=[ 155], 80.00th=[ 186], 90.00th=[ 215], 95.00th=[ 230], 00:25:03.418 | 99.00th=[ 271], 99.50th=[ 284], 99.90th=[ 347], 99.95th=[ 384], 00:25:03.418 | 99.99th=[ 388] 00:25:03.418 bw ( KiB/s): min=61440, max=328192, per=7.49%, avg=144281.45, stdev=82979.32, samples=20 00:25:03.418 iops : min= 240, max= 1282, avg=563.55, stdev=324.17, samples=20 00:25:03.418 lat (usec) : 1000=0.04% 00:25:03.418 lat (msec) : 2=0.72%, 4=0.04%, 10=2.16%, 20=6.72%, 50=15.98% 00:25:03.418 lat (msec) : 100=20.88%, 250=51.44%, 500=2.04% 00:25:03.418 cpu : usr=0.15%, sys=1.40%, ctx=1456, majf=0, minf=4097 00:25:03.418 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:25:03.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:03.418 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:03.418 issued rwts: total=5700,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:03.418 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:03.418 job4: (groupid=0, jobs=1): err= 0: pid=1536372: Tue Apr 23 21:25:55 2024 00:25:03.418 read: IOPS=667, BW=167MiB/s (175MB/s)(1674MiB/10033msec) 00:25:03.418 slat (usec): min=7, max=195369, avg=881.97, stdev=5693.89 00:25:03.418 clat (usec): min=1510, max=407317, avg=94951.48, stdev=70188.47 00:25:03.418 lat (usec): min=1550, max=407342, avg=95833.45, stdev=71017.34 00:25:03.418 clat percentiles (msec): 00:25:03.418 | 1.00th=[ 4], 5.00th=[ 11], 10.00th=[ 19], 20.00th=[ 34], 00:25:03.418 | 30.00th=[ 48], 40.00th=[ 58], 50.00th=[ 73], 60.00th=[ 92], 00:25:03.418 | 70.00th=[ 130], 80.00th=[ 157], 90.00th=[ 209], 95.00th=[ 232], 00:25:03.418 | 99.00th=[ 275], 99.50th=[ 275], 99.90th=[ 300], 99.95th=[ 300], 00:25:03.418 | 99.99th=[ 409] 00:25:03.418 bw ( KiB/s): min=71168, max=274906, per=8.81%, avg=169740.65, stdev=60264.38, samples=20 00:25:03.418 iops : min= 278, max= 1073, avg=662.95, stdev=235.36, samples=20 00:25:03.418 lat (msec) : 2=0.07%, 4=1.40%, 10=3.52%, 20=6.38%, 50=20.43% 00:25:03.418 lat (msec) : 100=31.62%, 250=34.04%, 500=2.54% 00:25:03.418 cpu : usr=0.18%, sys=1.80%, ctx=1654, majf=0, minf=4097 00:25:03.418 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:25:03.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:03.418 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:03.418 issued rwts: total=6696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:03.418 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:03.418 job5: (groupid=0, jobs=1): err= 0: pid=1536373: Tue Apr 23 21:25:55 2024 00:25:03.418 read: IOPS=567, BW=142MiB/s (149MB/s)(1431MiB/10084msec) 00:25:03.418 slat (usec): min=7, max=173089, avg=960.08, stdev=5379.50 00:25:03.418 clat (usec): min=1203, max=371691, avg=111725.88, 
stdev=70079.49 00:25:03.418 lat (usec): min=1234, max=371719, avg=112685.96, stdev=70683.79 00:25:03.418 clat percentiles (msec): 00:25:03.418 | 1.00th=[ 3], 5.00th=[ 7], 10.00th=[ 14], 20.00th=[ 35], 00:25:03.418 | 30.00th=[ 70], 40.00th=[ 91], 50.00th=[ 116], 60.00th=[ 131], 00:25:03.418 | 70.00th=[ 146], 80.00th=[ 180], 90.00th=[ 209], 95.00th=[ 224], 00:25:03.418 | 99.00th=[ 271], 99.50th=[ 292], 99.90th=[ 317], 99.95th=[ 347], 00:25:03.418 | 99.99th=[ 372] 00:25:03.418 bw ( KiB/s): min=87040, max=235008, per=7.52%, avg=144820.65, stdev=42893.60, samples=20 00:25:03.418 iops : min= 340, max= 918, avg=565.60, stdev=167.54, samples=20 00:25:03.418 lat (msec) : 2=0.65%, 4=0.84%, 10=6.59%, 20=5.68%, 50=9.51% 00:25:03.418 lat (msec) : 100=20.48%, 250=54.69%, 500=1.57% 00:25:03.418 cpu : usr=0.15%, sys=1.51%, ctx=1638, majf=0, minf=4097 00:25:03.418 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:25:03.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:03.418 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:03.418 issued rwts: total=5723,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:03.418 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:03.418 job6: (groupid=0, jobs=1): err= 0: pid=1536374: Tue Apr 23 21:25:55 2024 00:25:03.418 read: IOPS=768, BW=192MiB/s (201MB/s)(1928MiB/10037msec) 00:25:03.418 slat (usec): min=8, max=206306, avg=1254.47, stdev=4279.75 00:25:03.418 clat (msec): min=9, max=282, avg=82.00, stdev=36.82 00:25:03.418 lat (msec): min=9, max=450, avg=83.25, stdev=37.43 00:25:03.418 clat percentiles (msec): 00:25:03.418 | 1.00th=[ 32], 5.00th=[ 44], 10.00th=[ 47], 20.00th=[ 53], 00:25:03.418 | 30.00th=[ 58], 40.00th=[ 63], 50.00th=[ 70], 60.00th=[ 83], 00:25:03.418 | 70.00th=[ 97], 80.00th=[ 114], 90.00th=[ 133], 95.00th=[ 144], 00:25:03.418 | 99.00th=[ 241], 99.50th=[ 259], 99.90th=[ 275], 99.95th=[ 275], 00:25:03.418 | 99.99th=[ 284] 00:25:03.418 bw ( KiB/s): min=111616, max=308224, per=10.16%, avg=195722.95, stdev=66445.56, samples=20 00:25:03.418 iops : min= 436, max= 1204, avg=764.45, stdev=259.52, samples=20 00:25:03.418 lat (msec) : 10=0.04%, 20=0.52%, 50=14.30%, 100=57.70%, 250=26.66% 00:25:03.418 lat (msec) : 500=0.78% 00:25:03.418 cpu : usr=0.15%, sys=2.38%, ctx=1519, majf=0, minf=4097 00:25:03.418 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:03.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:03.418 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:03.418 issued rwts: total=7711,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:03.418 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:03.418 job7: (groupid=0, jobs=1): err= 0: pid=1536376: Tue Apr 23 21:25:55 2024 00:25:03.418 read: IOPS=834, BW=209MiB/s (219MB/s)(2102MiB/10081msec) 00:25:03.418 slat (usec): min=7, max=161805, avg=706.28, stdev=4602.52 00:25:03.418 clat (usec): min=1127, max=349110, avg=75965.18, stdev=51057.14 00:25:03.418 lat (usec): min=1153, max=392366, avg=76671.47, stdev=51676.34 00:25:03.418 clat percentiles (msec): 00:25:03.418 | 1.00th=[ 5], 5.00th=[ 11], 10.00th=[ 19], 20.00th=[ 36], 00:25:03.418 | 30.00th=[ 50], 40.00th=[ 59], 50.00th=[ 69], 60.00th=[ 80], 00:25:03.418 | 70.00th=[ 88], 80.00th=[ 102], 90.00th=[ 138], 95.00th=[ 192], 00:25:03.418 | 99.00th=[ 247], 99.50th=[ 266], 99.90th=[ 275], 99.95th=[ 279], 00:25:03.418 | 99.99th=[ 351] 00:25:03.418 bw ( KiB/s): min=81920, max=346624, 
per=11.09%, avg=213573.10, stdev=58519.57, samples=20 00:25:03.418 iops : min= 320, max= 1354, avg=834.20, stdev=228.64, samples=20 00:25:03.418 lat (msec) : 2=0.13%, 4=0.58%, 10=3.47%, 20=6.68%, 50=20.36% 00:25:03.418 lat (msec) : 100=48.19%, 250=19.59%, 500=0.99% 00:25:03.418 cpu : usr=0.15%, sys=2.16%, ctx=1936, majf=0, minf=3597 00:25:03.418 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:25:03.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:03.418 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:03.418 issued rwts: total=8408,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:03.418 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:03.418 job8: (groupid=0, jobs=1): err= 0: pid=1536380: Tue Apr 23 21:25:55 2024 00:25:03.418 read: IOPS=400, BW=100MiB/s (105MB/s)(1012MiB/10092msec) 00:25:03.418 slat (usec): min=12, max=160747, avg=2291.10, stdev=8448.32 00:25:03.418 clat (msec): min=47, max=354, avg=157.21, stdev=47.01 00:25:03.418 lat (msec): min=48, max=354, avg=159.50, stdev=48.06 00:25:03.418 clat percentiles (msec): 00:25:03.418 | 1.00th=[ 82], 5.00th=[ 94], 10.00th=[ 101], 20.00th=[ 116], 00:25:03.418 | 30.00th=[ 128], 40.00th=[ 138], 50.00th=[ 148], 60.00th=[ 167], 00:25:03.418 | 70.00th=[ 184], 80.00th=[ 203], 90.00th=[ 220], 95.00th=[ 236], 00:25:03.418 | 99.00th=[ 271], 99.50th=[ 288], 99.90th=[ 342], 99.95th=[ 355], 00:25:03.418 | 99.99th=[ 355] 00:25:03.418 bw ( KiB/s): min=63488, max=148480, per=5.29%, avg=101939.30, stdev=27430.37, samples=20 00:25:03.418 iops : min= 248, max= 580, avg=398.15, stdev=107.16, samples=20 00:25:03.418 lat (msec) : 50=0.40%, 100=9.54%, 250=86.78%, 500=3.29% 00:25:03.418 cpu : usr=0.11%, sys=1.38%, ctx=886, majf=0, minf=4097 00:25:03.418 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:25:03.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:03.418 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:03.418 issued rwts: total=4046,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:03.418 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:03.418 job9: (groupid=0, jobs=1): err= 0: pid=1536381: Tue Apr 23 21:25:55 2024 00:25:03.418 read: IOPS=664, BW=166MiB/s (174MB/s)(1675MiB/10079msec) 00:25:03.418 slat (usec): min=8, max=148692, avg=963.13, stdev=5731.74 00:25:03.418 clat (msec): min=2, max=333, avg=95.26, stdev=67.56 00:25:03.418 lat (msec): min=2, max=370, avg=96.22, stdev=68.52 00:25:03.418 clat percentiles (msec): 00:25:03.418 | 1.00th=[ 5], 5.00th=[ 12], 10.00th=[ 21], 20.00th=[ 37], 00:25:03.418 | 30.00th=[ 50], 40.00th=[ 62], 50.00th=[ 75], 60.00th=[ 91], 00:25:03.418 | 70.00th=[ 116], 80.00th=[ 171], 90.00th=[ 203], 95.00th=[ 220], 00:25:03.418 | 99.00th=[ 257], 99.50th=[ 275], 99.90th=[ 309], 99.95th=[ 334], 00:25:03.418 | 99.99th=[ 334] 00:25:03.419 bw ( KiB/s): min=63488, max=294912, per=8.82%, avg=169829.75, stdev=64984.77, samples=20 00:25:03.419 iops : min= 248, max= 1152, avg=663.35, stdev=253.86, samples=20 00:25:03.419 lat (msec) : 4=0.42%, 10=3.57%, 20=5.69%, 50=20.63%, 100=33.08% 00:25:03.419 lat (msec) : 250=35.20%, 500=1.42% 00:25:03.419 cpu : usr=0.10%, sys=1.76%, ctx=1665, majf=0, minf=4097 00:25:03.419 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:25:03.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:03.419 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, 
>=64=0.0% 00:25:03.419 issued rwts: total=6699,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:03.419 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:03.419 job10: (groupid=0, jobs=1): err= 0: pid=1536382: Tue Apr 23 21:25:55 2024 00:25:03.419 read: IOPS=776, BW=194MiB/s (204MB/s)(1949MiB/10040msec) 00:25:03.419 slat (usec): min=8, max=38694, avg=1110.65, stdev=3294.97 00:25:03.419 clat (msec): min=17, max=279, avg=81.25, stdev=42.84 00:25:03.419 lat (msec): min=17, max=279, avg=82.36, stdev=43.27 00:25:03.419 clat percentiles (msec): 00:25:03.419 | 1.00th=[ 28], 5.00th=[ 31], 10.00th=[ 33], 20.00th=[ 36], 00:25:03.419 | 30.00th=[ 52], 40.00th=[ 64], 50.00th=[ 78], 60.00th=[ 89], 00:25:03.419 | 70.00th=[ 102], 80.00th=[ 116], 90.00th=[ 136], 95.00th=[ 153], 00:25:03.419 | 99.00th=[ 226], 99.50th=[ 243], 99.90th=[ 279], 99.95th=[ 279], 00:25:03.419 | 99.99th=[ 279] 00:25:03.419 bw ( KiB/s): min=110592, max=476672, per=10.28%, avg=197930.90, stdev=92938.19, samples=20 00:25:03.419 iops : min= 432, max= 1862, avg=773.10, stdev=363.05, samples=20 00:25:03.419 lat (msec) : 20=0.18%, 50=29.04%, 100=39.62%, 250=30.86%, 500=0.31% 00:25:03.419 cpu : usr=0.13%, sys=2.15%, ctx=1655, majf=0, minf=4097 00:25:03.419 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:03.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:03.419 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:03.419 issued rwts: total=7797,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:03.419 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:03.419 00:25:03.419 Run status group 0 (all jobs): 00:25:03.419 READ: bw=1881MiB/s (1972MB/s), 100MiB/s-286MiB/s (105MB/s-300MB/s), io=18.5GiB (19.9GB), run=10031-10092msec 00:25:03.419 00:25:03.419 Disk stats (read/write): 00:25:03.419 nvme0n1: ios=8711/0, merge=0/0, ticks=1250653/0, in_queue=1250653, util=96.12% 00:25:03.419 nvme10n1: ios=22438/0, merge=0/0, ticks=1226225/0, in_queue=1226225, util=96.31% 00:25:03.419 nvme1n1: ios=14502/0, merge=0/0, ticks=1254277/0, in_queue=1254277, util=96.86% 00:25:03.419 nvme2n1: ios=11339/0, merge=0/0, ticks=1251920/0, in_queue=1251920, util=97.09% 00:25:03.419 nvme3n1: ios=12830/0, merge=0/0, ticks=1224991/0, in_queue=1224991, util=97.15% 00:25:03.419 nvme4n1: ios=11350/0, merge=0/0, ticks=1251288/0, in_queue=1251288, util=97.74% 00:25:03.419 nvme5n1: ios=14928/0, merge=0/0, ticks=1219758/0, in_queue=1219758, util=97.92% 00:25:03.419 nvme6n1: ios=16743/0, merge=0/0, ticks=1256758/0, in_queue=1256758, util=98.17% 00:25:03.419 nvme7n1: ios=8011/0, merge=0/0, ticks=1239365/0, in_queue=1239365, util=98.78% 00:25:03.419 nvme8n1: ios=13342/0, merge=0/0, ticks=1251967/0, in_queue=1251967, util=99.03% 00:25:03.419 nvme9n1: ios=15128/0, merge=0/0, ticks=1223305/0, in_queue=1223305, util=99.21% 00:25:03.419 21:25:55 -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:25:03.419 [global] 00:25:03.419 thread=1 00:25:03.419 invalidate=1 00:25:03.419 rw=randwrite 00:25:03.419 time_based=1 00:25:03.419 runtime=10 00:25:03.419 ioengine=libaio 00:25:03.419 direct=1 00:25:03.419 bs=262144 00:25:03.419 iodepth=64 00:25:03.419 norandommap=1 00:25:03.419 numjobs=1 00:25:03.419 00:25:03.419 [job0] 00:25:03.419 filename=/dev/nvme0n1 00:25:03.419 [job1] 00:25:03.419 filename=/dev/nvme10n1 00:25:03.419 [job2] 00:25:03.419 filename=/dev/nvme1n1 00:25:03.419 [job3] 00:25:03.419 
filename=/dev/nvme2n1 00:25:03.419 [job4] 00:25:03.419 filename=/dev/nvme3n1 00:25:03.419 [job5] 00:25:03.419 filename=/dev/nvme4n1 00:25:03.419 [job6] 00:25:03.419 filename=/dev/nvme5n1 00:25:03.419 [job7] 00:25:03.419 filename=/dev/nvme6n1 00:25:03.419 [job8] 00:25:03.419 filename=/dev/nvme7n1 00:25:03.419 [job9] 00:25:03.419 filename=/dev/nvme8n1 00:25:03.419 [job10] 00:25:03.419 filename=/dev/nvme9n1 00:25:03.419 Could not set queue depth (nvme0n1) 00:25:03.419 Could not set queue depth (nvme10n1) 00:25:03.419 Could not set queue depth (nvme1n1) 00:25:03.419 Could not set queue depth (nvme2n1) 00:25:03.419 Could not set queue depth (nvme3n1) 00:25:03.419 Could not set queue depth (nvme4n1) 00:25:03.419 Could not set queue depth (nvme5n1) 00:25:03.419 Could not set queue depth (nvme6n1) 00:25:03.419 Could not set queue depth (nvme7n1) 00:25:03.419 Could not set queue depth (nvme8n1) 00:25:03.419 Could not set queue depth (nvme9n1) 00:25:03.419 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:03.419 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:03.419 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:03.419 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:03.419 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:03.419 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:03.419 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:03.419 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:03.419 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:03.419 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:03.419 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:03.419 fio-3.35 00:25:03.419 Starting 11 threads 00:25:13.399 00:25:13.399 job0: (groupid=0, jobs=1): err= 0: pid=1538572: Tue Apr 23 21:26:06 2024 00:25:13.399 write: IOPS=751, BW=188MiB/s (197MB/s)(1890MiB/10056msec); 0 zone resets 00:25:13.399 slat (usec): min=15, max=24697, avg=1320.97, stdev=2353.52 00:25:13.399 clat (msec): min=26, max=134, avg=83.79, stdev=23.49 00:25:13.399 lat (msec): min=26, max=134, avg=85.11, stdev=23.76 00:25:13.399 clat percentiles (msec): 00:25:13.399 | 1.00th=[ 55], 5.00th=[ 57], 10.00th=[ 58], 20.00th=[ 60], 00:25:13.399 | 30.00th=[ 61], 40.00th=[ 65], 50.00th=[ 90], 60.00th=[ 93], 00:25:13.399 | 70.00th=[ 94], 80.00th=[ 96], 90.00th=[ 125], 95.00th=[ 130], 00:25:13.400 | 99.00th=[ 132], 99.50th=[ 133], 99.90th=[ 134], 99.95th=[ 134], 00:25:13.400 | 99.99th=[ 136] 00:25:13.400 bw ( KiB/s): min=124928, max=276992, per=13.47%, avg=191935.90, stdev=54766.11, samples=20 00:25:13.400 iops : min= 488, max= 1082, avg=749.70, stdev=213.99, samples=20 00:25:13.400 lat (msec) : 50=0.21%, 100=85.11%, 250=14.68% 00:25:13.400 cpu : usr=2.29%, sys=1.76%, ctx=1939, majf=0, minf=1 00:25:13.400 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, 
>=64=99.2% 00:25:13.400 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:13.400 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:13.400 issued rwts: total=0,7560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:13.400 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:13.400 job1: (groupid=0, jobs=1): err= 0: pid=1538586: Tue Apr 23 21:26:06 2024 00:25:13.400 write: IOPS=521, BW=130MiB/s (137MB/s)(1320MiB/10130msec); 0 zone resets 00:25:13.400 slat (usec): min=18, max=96731, avg=1839.52, stdev=3513.40 00:25:13.400 clat (msec): min=4, max=251, avg=120.91, stdev=22.87 00:25:13.400 lat (msec): min=11, max=251, avg=122.75, stdev=22.97 00:25:13.400 clat percentiles (msec): 00:25:13.400 | 1.00th=[ 38], 5.00th=[ 91], 10.00th=[ 95], 20.00th=[ 100], 00:25:13.400 | 30.00th=[ 122], 40.00th=[ 127], 50.00th=[ 130], 60.00th=[ 133], 00:25:13.400 | 70.00th=[ 134], 80.00th=[ 136], 90.00th=[ 136], 95.00th=[ 138], 00:25:13.400 | 99.00th=[ 178], 99.50th=[ 194], 99.90th=[ 245], 99.95th=[ 245], 00:25:13.400 | 99.99th=[ 253] 00:25:13.400 bw ( KiB/s): min=120832, max=196608, per=9.37%, avg=133529.60, stdev=21051.14, samples=20 00:25:13.400 iops : min= 472, max= 768, avg=521.60, stdev=82.23, samples=20 00:25:13.400 lat (msec) : 10=0.02%, 20=0.23%, 50=1.44%, 100=22.86%, 250=75.42% 00:25:13.400 lat (msec) : 500=0.04% 00:25:13.400 cpu : usr=1.83%, sys=1.60%, ctx=1508, majf=0, minf=1 00:25:13.400 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:13.400 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:13.400 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:13.400 issued rwts: total=0,5280,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:13.400 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:13.400 job2: (groupid=0, jobs=1): err= 0: pid=1538589: Tue Apr 23 21:26:06 2024 00:25:13.400 write: IOPS=466, BW=117MiB/s (122MB/s)(1185MiB/10154msec); 0 zone resets 00:25:13.400 slat (usec): min=18, max=62982, avg=2059.54, stdev=3862.74 00:25:13.400 clat (msec): min=23, max=301, avg=134.94, stdev=21.64 00:25:13.400 lat (msec): min=23, max=301, avg=137.00, stdev=21.64 00:25:13.400 clat percentiles (msec): 00:25:13.400 | 1.00th=[ 81], 5.00th=[ 116], 10.00th=[ 121], 20.00th=[ 124], 00:25:13.400 | 30.00th=[ 127], 40.00th=[ 129], 50.00th=[ 130], 60.00th=[ 131], 00:25:13.400 | 70.00th=[ 133], 80.00th=[ 157], 90.00th=[ 163], 95.00th=[ 167], 00:25:13.400 | 99.00th=[ 186], 99.50th=[ 245], 99.90th=[ 292], 99.95th=[ 292], 00:25:13.400 | 99.99th=[ 300] 00:25:13.400 bw ( KiB/s): min=98304, max=130560, per=8.40%, avg=119705.60, stdev=11550.17, samples=20 00:25:13.400 iops : min= 384, max= 510, avg=467.60, stdev=45.12, samples=20 00:25:13.400 lat (msec) : 50=0.42%, 100=1.84%, 250=97.28%, 500=0.46% 00:25:13.400 cpu : usr=1.51%, sys=1.62%, ctx=1300, majf=0, minf=1 00:25:13.400 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:25:13.400 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:13.400 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:13.400 issued rwts: total=0,4740,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:13.400 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:13.400 job3: (groupid=0, jobs=1): err= 0: pid=1538590: Tue Apr 23 21:26:06 2024 00:25:13.400 write: IOPS=462, BW=116MiB/s (121MB/s)(1173MiB/10144msec); 0 zone resets 00:25:13.400 slat (usec): min=19, max=135219, avg=2128.05, 
stdev=4515.66 00:25:13.400 clat (msec): min=23, max=300, avg=136.23, stdev=23.96 00:25:13.400 lat (msec): min=23, max=300, avg=138.36, stdev=23.91 00:25:13.400 clat percentiles (msec): 00:25:13.400 | 1.00th=[ 61], 5.00th=[ 117], 10.00th=[ 121], 20.00th=[ 125], 00:25:13.400 | 30.00th=[ 128], 40.00th=[ 129], 50.00th=[ 130], 60.00th=[ 132], 00:25:13.400 | 70.00th=[ 140], 80.00th=[ 157], 90.00th=[ 163], 95.00th=[ 165], 00:25:13.400 | 99.00th=[ 230], 99.50th=[ 253], 99.90th=[ 292], 99.95th=[ 292], 00:25:13.400 | 99.99th=[ 300] 00:25:13.400 bw ( KiB/s): min=100352, max=129024, per=8.32%, avg=118463.50, stdev=11254.86, samples=20 00:25:13.400 iops : min= 392, max= 504, avg=462.70, stdev=43.94, samples=20 00:25:13.400 lat (msec) : 50=0.68%, 100=2.05%, 250=96.67%, 500=0.60% 00:25:13.400 cpu : usr=1.63%, sys=1.46%, ctx=1188, majf=0, minf=1 00:25:13.400 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:25:13.400 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:13.400 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:13.400 issued rwts: total=0,4690,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:13.400 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:13.400 job4: (groupid=0, jobs=1): err= 0: pid=1538591: Tue Apr 23 21:26:06 2024 00:25:13.400 write: IOPS=444, BW=111MiB/s (117MB/s)(1127MiB/10131msec); 0 zone resets 00:25:13.400 slat (usec): min=19, max=79500, avg=2216.43, stdev=4328.44 00:25:13.400 clat (msec): min=77, max=268, avg=141.61, stdev=15.94 00:25:13.400 lat (msec): min=77, max=268, avg=143.83, stdev=15.60 00:25:13.400 clat percentiles (msec): 00:25:13.400 | 1.00th=[ 117], 5.00th=[ 128], 10.00th=[ 131], 20.00th=[ 134], 00:25:13.400 | 30.00th=[ 136], 40.00th=[ 138], 50.00th=[ 140], 60.00th=[ 142], 00:25:13.400 | 70.00th=[ 144], 80.00th=[ 146], 90.00th=[ 150], 95.00th=[ 171], 00:25:13.400 | 99.00th=[ 213], 99.50th=[ 234], 99.90th=[ 259], 99.95th=[ 259], 00:25:13.400 | 99.99th=[ 271] 00:25:13.400 bw ( KiB/s): min=94909, max=124928, per=7.99%, avg=113750.25, stdev=7633.38, samples=20 00:25:13.400 iops : min= 370, max= 488, avg=444.30, stdev=29.91, samples=20 00:25:13.400 lat (msec) : 100=0.58%, 250=99.20%, 500=0.22% 00:25:13.400 cpu : usr=1.58%, sys=1.20%, ctx=1157, majf=0, minf=1 00:25:13.400 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:25:13.400 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:13.400 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:13.400 issued rwts: total=0,4506,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:13.400 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:13.400 job5: (groupid=0, jobs=1): err= 0: pid=1538599: Tue Apr 23 21:26:06 2024 00:25:13.400 write: IOPS=485, BW=121MiB/s (127MB/s)(1230MiB/10127msec); 0 zone resets 00:25:13.400 slat (usec): min=27, max=68988, avg=1976.21, stdev=3569.65 00:25:13.400 clat (msec): min=39, max=252, avg=129.56, stdev=13.93 00:25:13.400 lat (msec): min=40, max=252, avg=131.54, stdev=13.78 00:25:13.400 clat percentiles (msec): 00:25:13.400 | 1.00th=[ 65], 5.00th=[ 117], 10.00th=[ 123], 20.00th=[ 126], 00:25:13.400 | 30.00th=[ 128], 40.00th=[ 131], 50.00th=[ 132], 60.00th=[ 133], 00:25:13.400 | 70.00th=[ 134], 80.00th=[ 136], 90.00th=[ 136], 95.00th=[ 138], 00:25:13.400 | 99.00th=[ 171], 99.50th=[ 203], 99.90th=[ 245], 99.95th=[ 245], 00:25:13.400 | 99.99th=[ 253] 00:25:13.400 bw ( KiB/s): min=106196, max=144384, per=8.73%, avg=124375.40, 
stdev=7643.29, samples=20 00:25:13.400 iops : min= 414, max= 564, avg=485.80, stdev=29.96, samples=20 00:25:13.400 lat (msec) : 50=0.41%, 100=2.40%, 250=97.16%, 500=0.04% 00:25:13.400 cpu : usr=1.27%, sys=1.61%, ctx=1429, majf=0, minf=1 00:25:13.400 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:25:13.400 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:13.400 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:13.400 issued rwts: total=0,4921,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:13.400 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:13.400 job6: (groupid=0, jobs=1): err= 0: pid=1538611: Tue Apr 23 21:26:06 2024 00:25:13.400 write: IOPS=458, BW=115MiB/s (120MB/s)(1163MiB/10144msec); 0 zone resets 00:25:13.400 slat (usec): min=18, max=70907, avg=2126.60, stdev=4093.24 00:25:13.400 clat (msec): min=18, max=302, avg=137.18, stdev=21.49 00:25:13.400 lat (msec): min=18, max=302, avg=139.31, stdev=21.42 00:25:13.400 clat percentiles (msec): 00:25:13.400 | 1.00th=[ 96], 5.00th=[ 120], 10.00th=[ 122], 20.00th=[ 125], 00:25:13.400 | 30.00th=[ 128], 40.00th=[ 129], 50.00th=[ 130], 60.00th=[ 131], 00:25:13.400 | 70.00th=[ 136], 80.00th=[ 159], 90.00th=[ 167], 95.00th=[ 176], 00:25:13.400 | 99.00th=[ 197], 99.50th=[ 245], 99.90th=[ 292], 99.95th=[ 292], 00:25:13.400 | 99.99th=[ 305] 00:25:13.400 bw ( KiB/s): min=94208, max=132608, per=8.25%, avg=117452.80, stdev=13868.74, samples=20 00:25:13.400 iops : min= 368, max= 518, avg=458.80, stdev=54.17, samples=20 00:25:13.400 lat (msec) : 20=0.04%, 100=1.20%, 250=98.28%, 500=0.47% 00:25:13.400 cpu : usr=1.34%, sys=1.37%, ctx=1227, majf=0, minf=1 00:25:13.400 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:25:13.400 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:13.400 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:13.400 issued rwts: total=0,4651,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:13.400 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:13.400 job7: (groupid=0, jobs=1): err= 0: pid=1538619: Tue Apr 23 21:26:06 2024 00:25:13.400 write: IOPS=461, BW=115MiB/s (121MB/s)(1170MiB/10132msec); 0 zone resets 00:25:13.400 slat (usec): min=19, max=19331, avg=1993.39, stdev=3614.47 00:25:13.400 clat (msec): min=21, max=269, avg=136.57, stdev=18.03 00:25:13.400 lat (msec): min=21, max=269, avg=138.56, stdev=18.11 00:25:13.400 clat percentiles (msec): 00:25:13.400 | 1.00th=[ 72], 5.00th=[ 105], 10.00th=[ 127], 20.00th=[ 132], 00:25:13.400 | 30.00th=[ 134], 40.00th=[ 138], 50.00th=[ 138], 60.00th=[ 140], 00:25:13.400 | 70.00th=[ 142], 80.00th=[ 144], 90.00th=[ 148], 95.00th=[ 153], 00:25:13.400 | 99.00th=[ 176], 99.50th=[ 218], 99.90th=[ 262], 99.95th=[ 262], 00:25:13.400 | 99.99th=[ 271] 00:25:13.400 bw ( KiB/s): min=110592, max=130048, per=8.29%, avg=118155.25, stdev=5144.85, samples=20 00:25:13.400 iops : min= 432, max= 508, avg=461.50, stdev=20.15, samples=20 00:25:13.400 lat (msec) : 50=0.43%, 100=3.89%, 250=95.47%, 500=0.21% 00:25:13.400 cpu : usr=1.70%, sys=1.25%, ctx=1460, majf=0, minf=1 00:25:13.400 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:25:13.400 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:13.400 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:13.400 issued rwts: total=0,4678,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:13.401 latency : target=0, 
window=0, percentile=100.00%, depth=64 00:25:13.401 job8: (groupid=0, jobs=1): err= 0: pid=1538633: Tue Apr 23 21:26:06 2024 00:25:13.401 write: IOPS=516, BW=129MiB/s (135MB/s)(1306MiB/10124msec); 0 zone resets 00:25:13.401 slat (usec): min=18, max=13237, avg=1910.92, stdev=3285.49 00:25:13.401 clat (msec): min=15, max=253, avg=122.08, stdev=19.14 00:25:13.401 lat (msec): min=15, max=253, avg=123.99, stdev=19.13 00:25:13.401 clat percentiles (msec): 00:25:13.401 | 1.00th=[ 89], 5.00th=[ 93], 10.00th=[ 96], 20.00th=[ 100], 00:25:13.401 | 30.00th=[ 123], 40.00th=[ 127], 50.00th=[ 131], 60.00th=[ 132], 00:25:13.401 | 70.00th=[ 134], 80.00th=[ 136], 90.00th=[ 136], 95.00th=[ 138], 00:25:13.401 | 99.00th=[ 148], 99.50th=[ 197], 99.90th=[ 245], 99.95th=[ 245], 00:25:13.401 | 99.99th=[ 253] 00:25:13.401 bw ( KiB/s): min=120832, max=167936, per=9.28%, avg=132137.80, stdev=17648.34, samples=20 00:25:13.401 iops : min= 472, max= 656, avg=516.15, stdev=68.92, samples=20 00:25:13.401 lat (msec) : 20=0.08%, 50=0.38%, 100=23.56%, 250=75.94%, 500=0.04% 00:25:13.401 cpu : usr=1.71%, sys=1.62%, ctx=1356, majf=0, minf=1 00:25:13.401 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:13.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:13.401 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:13.401 issued rwts: total=0,5224,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:13.401 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:13.401 job9: (groupid=0, jobs=1): err= 0: pid=1538637: Tue Apr 23 21:26:06 2024 00:25:13.401 write: IOPS=557, BW=139MiB/s (146MB/s)(1415MiB/10148msec); 0 zone resets 00:25:13.401 slat (usec): min=12, max=20775, avg=1752.32, stdev=3125.66 00:25:13.401 clat (msec): min=20, max=301, avg=112.99, stdev=29.95 00:25:13.401 lat (msec): min=20, max=301, avg=114.74, stdev=30.22 00:25:13.401 clat percentiles (msec): 00:25:13.401 | 1.00th=[ 83], 5.00th=[ 88], 10.00th=[ 89], 20.00th=[ 92], 00:25:13.401 | 30.00th=[ 93], 40.00th=[ 94], 50.00th=[ 95], 60.00th=[ 105], 00:25:13.401 | 70.00th=[ 129], 80.00th=[ 146], 90.00th=[ 161], 95.00th=[ 163], 00:25:13.401 | 99.00th=[ 178], 99.50th=[ 232], 99.90th=[ 292], 99.95th=[ 292], 00:25:13.401 | 99.99th=[ 300] 00:25:13.401 bw ( KiB/s): min=100352, max=178176, per=10.06%, avg=143244.45, stdev=31592.00, samples=20 00:25:13.401 iops : min= 392, max= 696, avg=559.50, stdev=123.44, samples=20 00:25:13.401 lat (msec) : 50=0.34%, 100=59.19%, 250=40.08%, 500=0.39% 00:25:13.401 cpu : usr=1.79%, sys=1.38%, ctx=1512, majf=0, minf=1 00:25:13.401 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:25:13.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:13.401 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:13.401 issued rwts: total=0,5658,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:13.401 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:13.401 job10: (groupid=0, jobs=1): err= 0: pid=1538644: Tue Apr 23 21:26:06 2024 00:25:13.401 write: IOPS=453, BW=113MiB/s (119MB/s)(1149MiB/10132msec); 0 zone resets 00:25:13.401 slat (usec): min=20, max=25031, avg=2107.95, stdev=3708.89 00:25:13.401 clat (msec): min=27, max=269, avg=138.99, stdev=15.74 00:25:13.401 lat (msec): min=27, max=269, avg=141.10, stdev=15.61 00:25:13.401 clat percentiles (msec): 00:25:13.401 | 1.00th=[ 93], 5.00th=[ 125], 10.00th=[ 129], 20.00th=[ 133], 00:25:13.401 | 30.00th=[ 136], 40.00th=[ 138], 50.00th=[ 140], 
60.00th=[ 142], 00:25:13.401 | 70.00th=[ 142], 80.00th=[ 146], 90.00th=[ 148], 95.00th=[ 161], 00:25:13.401 | 99.00th=[ 186], 99.50th=[ 218], 99.90th=[ 262], 99.95th=[ 262], 00:25:13.401 | 99.99th=[ 271] 00:25:13.401 bw ( KiB/s): min=107008, max=128512, per=8.14%, avg=115993.60, stdev=5447.64, samples=20 00:25:13.401 iops : min= 418, max= 502, avg=453.10, stdev=21.28, samples=20 00:25:13.401 lat (msec) : 50=0.35%, 100=1.22%, 250=98.22%, 500=0.22% 00:25:13.401 cpu : usr=1.32%, sys=1.27%, ctx=1276, majf=0, minf=1 00:25:13.401 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:25:13.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:13.401 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:13.401 issued rwts: total=0,4594,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:13.401 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:13.401 00:25:13.401 Run status group 0 (all jobs): 00:25:13.401 WRITE: bw=1391MiB/s (1459MB/s), 111MiB/s-188MiB/s (117MB/s-197MB/s), io=13.8GiB (14.8GB), run=10056-10154msec 00:25:13.401 00:25:13.401 Disk stats (read/write): 00:25:13.401 nvme0n1: ios=49/14645, merge=0/0, ticks=177/1199696, in_queue=1199873, util=97.89% 00:25:13.401 nvme10n1: ios=47/10522, merge=0/0, ticks=2886/1221015, in_queue=1223901, util=100.00% 00:25:13.401 nvme1n1: ios=48/9425, merge=0/0, ticks=2857/1220213, in_queue=1223070, util=99.89% 00:25:13.401 nvme2n1: ios=42/9335, merge=0/0, ticks=2700/1208403, in_queue=1211103, util=99.92% 00:25:13.401 nvme3n1: ios=47/8973, merge=0/0, ticks=2911/1215736, in_queue=1218647, util=99.88% 00:25:13.401 nvme4n1: ios=44/9805, merge=0/0, ticks=1932/1227350, in_queue=1229282, util=99.97% 00:25:13.401 nvme5n1: ios=46/9258, merge=0/0, ticks=2118/1218250, in_queue=1220368, util=99.95% 00:25:13.401 nvme6n1: ios=45/9318, merge=0/0, ticks=108/1229697, in_queue=1229805, util=98.64% 00:25:13.401 nvme7n1: ios=0/10416, merge=0/0, ticks=0/1226760, in_queue=1226760, util=98.71% 00:25:13.401 nvme8n1: ios=0/11269, merge=0/0, ticks=0/1225700, in_queue=1225700, util=98.96% 00:25:13.401 nvme9n1: ios=0/9149, merge=0/0, ticks=0/1228296, in_queue=1228296, util=99.14% 00:25:13.401 21:26:06 -- target/multiconnection.sh@36 -- # sync 00:25:13.401 21:26:07 -- target/multiconnection.sh@37 -- # seq 1 11 00:25:13.401 21:26:07 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:13.401 21:26:07 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:13.401 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:13.401 21:26:07 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:25:13.401 21:26:07 -- common/autotest_common.sh@1205 -- # local i=0 00:25:13.401 21:26:07 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:25:13.401 21:26:07 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK1 00:25:13.401 21:26:07 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:25:13.401 21:26:07 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK1 00:25:13.401 21:26:07 -- common/autotest_common.sh@1217 -- # return 0 00:25:13.401 21:26:07 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:13.401 21:26:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:13.401 21:26:07 -- common/autotest_common.sh@10 -- # set +x 00:25:13.401 21:26:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:13.401 21:26:07 -- target/multiconnection.sh@37 -- # for i in 
$(seq 1 $NVMF_SUBSYS) 00:25:13.401 21:26:07 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:25:13.660 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:25:13.660 21:26:07 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:25:13.660 21:26:07 -- common/autotest_common.sh@1205 -- # local i=0 00:25:13.660 21:26:07 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:25:13.660 21:26:07 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK2 00:25:13.660 21:26:07 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:25:13.660 21:26:07 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK2 00:25:13.660 21:26:07 -- common/autotest_common.sh@1217 -- # return 0 00:25:13.660 21:26:07 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:13.660 21:26:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:13.660 21:26:07 -- common/autotest_common.sh@10 -- # set +x 00:25:13.660 21:26:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:13.660 21:26:07 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:13.660 21:26:07 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:25:14.282 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:25:14.282 21:26:08 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:25:14.282 21:26:08 -- common/autotest_common.sh@1205 -- # local i=0 00:25:14.282 21:26:08 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:25:14.282 21:26:08 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK3 00:25:14.282 21:26:08 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:25:14.282 21:26:08 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK3 00:25:14.282 21:26:08 -- common/autotest_common.sh@1217 -- # return 0 00:25:14.282 21:26:08 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:25:14.282 21:26:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:14.282 21:26:08 -- common/autotest_common.sh@10 -- # set +x 00:25:14.282 21:26:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:14.282 21:26:08 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:14.282 21:26:08 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:25:14.586 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:25:14.586 21:26:08 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:25:14.586 21:26:08 -- common/autotest_common.sh@1205 -- # local i=0 00:25:14.586 21:26:08 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK4 00:25:14.586 21:26:08 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:25:14.586 21:26:08 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:25:14.586 21:26:08 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK4 00:25:14.586 21:26:08 -- common/autotest_common.sh@1217 -- # return 0 00:25:14.586 21:26:08 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:25:14.586 21:26:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:14.586 21:26:08 -- common/autotest_common.sh@10 -- # set +x 00:25:14.586 21:26:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:14.586 21:26:08 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:14.586 21:26:08 -- target/multiconnection.sh@38 -- # nvme disconnect 
-n nqn.2016-06.io.spdk:cnode5 00:25:14.844 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:25:14.845 21:26:09 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:25:14.845 21:26:09 -- common/autotest_common.sh@1205 -- # local i=0 00:25:14.845 21:26:09 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:25:14.845 21:26:09 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK5 00:25:14.845 21:26:09 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:25:14.845 21:26:09 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK5 00:25:14.845 21:26:09 -- common/autotest_common.sh@1217 -- # return 0 00:25:14.845 21:26:09 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:25:14.845 21:26:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:14.845 21:26:09 -- common/autotest_common.sh@10 -- # set +x 00:25:14.845 21:26:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:14.845 21:26:09 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:14.845 21:26:09 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:25:15.103 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:25:15.103 21:26:09 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:25:15.103 21:26:09 -- common/autotest_common.sh@1205 -- # local i=0 00:25:15.103 21:26:09 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:25:15.103 21:26:09 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK6 00:25:15.103 21:26:09 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:25:15.103 21:26:09 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK6 00:25:15.103 21:26:09 -- common/autotest_common.sh@1217 -- # return 0 00:25:15.103 21:26:09 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:25:15.103 21:26:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:15.103 21:26:09 -- common/autotest_common.sh@10 -- # set +x 00:25:15.103 21:26:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:15.103 21:26:09 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:15.103 21:26:09 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:25:15.674 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:25:15.674 21:26:09 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:25:15.674 21:26:09 -- common/autotest_common.sh@1205 -- # local i=0 00:25:15.674 21:26:09 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:25:15.674 21:26:09 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK7 00:25:15.674 21:26:09 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:25:15.674 21:26:09 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK7 00:25:15.674 21:26:09 -- common/autotest_common.sh@1217 -- # return 0 00:25:15.674 21:26:09 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:25:15.674 21:26:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:15.674 21:26:09 -- common/autotest_common.sh@10 -- # set +x 00:25:15.674 21:26:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:15.674 21:26:09 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:15.674 21:26:09 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:25:15.935 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 
controller(s) 00:25:15.935 21:26:10 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:25:15.935 21:26:10 -- common/autotest_common.sh@1205 -- # local i=0 00:25:15.935 21:26:10 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:25:15.935 21:26:10 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK8 00:25:15.935 21:26:10 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:25:15.935 21:26:10 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK8 00:25:15.935 21:26:10 -- common/autotest_common.sh@1217 -- # return 0 00:25:15.935 21:26:10 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:25:15.935 21:26:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:15.935 21:26:10 -- common/autotest_common.sh@10 -- # set +x 00:25:15.935 21:26:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:15.935 21:26:10 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:15.935 21:26:10 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:25:16.197 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:25:16.197 21:26:10 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:25:16.197 21:26:10 -- common/autotest_common.sh@1205 -- # local i=0 00:25:16.197 21:26:10 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:25:16.197 21:26:10 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK9 00:25:16.197 21:26:10 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:25:16.197 21:26:10 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK9 00:25:16.197 21:26:10 -- common/autotest_common.sh@1217 -- # return 0 00:25:16.197 21:26:10 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:25:16.197 21:26:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:16.197 21:26:10 -- common/autotest_common.sh@10 -- # set +x 00:25:16.197 21:26:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:16.197 21:26:10 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:16.197 21:26:10 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:25:16.456 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:25:16.456 21:26:10 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:25:16.456 21:26:10 -- common/autotest_common.sh@1205 -- # local i=0 00:25:16.456 21:26:10 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:25:16.456 21:26:10 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK10 00:25:16.456 21:26:10 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:25:16.456 21:26:10 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK10 00:25:16.456 21:26:10 -- common/autotest_common.sh@1217 -- # return 0 00:25:16.456 21:26:10 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:25:16.456 21:26:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:16.456 21:26:10 -- common/autotest_common.sh@10 -- # set +x 00:25:16.456 21:26:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:16.456 21:26:10 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:16.456 21:26:10 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:25:16.456 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:25:16.456 21:26:10 -- target/multiconnection.sh@39 -- # 
waitforserial_disconnect SPDK11 00:25:16.456 21:26:10 -- common/autotest_common.sh@1205 -- # local i=0 00:25:16.456 21:26:10 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:25:16.456 21:26:10 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK11 00:25:16.457 21:26:10 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:25:16.457 21:26:10 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK11 00:25:16.457 21:26:10 -- common/autotest_common.sh@1217 -- # return 0 00:25:16.457 21:26:10 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:25:16.457 21:26:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:16.457 21:26:10 -- common/autotest_common.sh@10 -- # set +x 00:25:16.457 21:26:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:16.457 21:26:10 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:25:16.457 21:26:10 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:25:16.457 21:26:10 -- target/multiconnection.sh@47 -- # nvmftestfini 00:25:16.457 21:26:10 -- nvmf/common.sh@477 -- # nvmfcleanup 00:25:16.457 21:26:10 -- nvmf/common.sh@117 -- # sync 00:25:16.457 21:26:10 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:16.457 21:26:10 -- nvmf/common.sh@120 -- # set +e 00:25:16.457 21:26:10 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:16.457 21:26:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:16.457 rmmod nvme_tcp 00:25:16.457 rmmod nvme_fabrics 00:25:16.715 rmmod nvme_keyring 00:25:16.715 21:26:10 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:16.715 21:26:10 -- nvmf/common.sh@124 -- # set -e 00:25:16.715 21:26:10 -- nvmf/common.sh@125 -- # return 0 00:25:16.715 21:26:10 -- nvmf/common.sh@478 -- # '[' -n 1527934 ']' 00:25:16.715 21:26:10 -- nvmf/common.sh@479 -- # killprocess 1527934 00:25:16.715 21:26:10 -- common/autotest_common.sh@936 -- # '[' -z 1527934 ']' 00:25:16.715 21:26:10 -- common/autotest_common.sh@940 -- # kill -0 1527934 00:25:16.715 21:26:10 -- common/autotest_common.sh@941 -- # uname 00:25:16.715 21:26:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:16.715 21:26:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1527934 00:25:16.715 21:26:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:16.715 21:26:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:16.715 21:26:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1527934' 00:25:16.715 killing process with pid 1527934 00:25:16.715 21:26:10 -- common/autotest_common.sh@955 -- # kill 1527934 00:25:16.715 21:26:10 -- common/autotest_common.sh@960 -- # wait 1527934 00:25:18.097 21:26:11 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:25:18.097 21:26:11 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:25:18.097 21:26:11 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:25:18.097 21:26:11 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:18.097 21:26:11 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:18.097 21:26:11 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:18.097 21:26:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:18.097 21:26:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:20.005 21:26:14 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:20.005 00:25:20.005 real 1m17.055s 00:25:20.005 user 4m57.737s 00:25:20.005 sys 0m19.222s 00:25:20.005 21:26:14 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:25:20.005 21:26:14 -- common/autotest_common.sh@10 -- # set +x 00:25:20.005 ************************************ 00:25:20.005 END TEST nvmf_multiconnection 00:25:20.005 ************************************ 00:25:20.005 21:26:14 -- nvmf/nvmf.sh@67 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:25:20.005 21:26:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:20.005 21:26:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:20.005 21:26:14 -- common/autotest_common.sh@10 -- # set +x 00:25:20.005 ************************************ 00:25:20.005 START TEST nvmf_initiator_timeout 00:25:20.005 ************************************ 00:25:20.005 21:26:14 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:25:20.005 * Looking for test storage... 00:25:20.005 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:25:20.005 21:26:14 -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:25:20.005 21:26:14 -- nvmf/common.sh@7 -- # uname -s 00:25:20.005 21:26:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:20.005 21:26:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:20.005 21:26:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:20.005 21:26:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:20.005 21:26:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:20.005 21:26:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:20.005 21:26:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:20.005 21:26:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:20.005 21:26:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:20.005 21:26:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:20.005 21:26:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:25:20.005 21:26:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:25:20.006 21:26:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:20.006 21:26:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:20.006 21:26:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:25:20.006 21:26:14 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:20.006 21:26:14 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:25:20.006 21:26:14 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:20.006 21:26:14 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:20.006 21:26:14 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:20.006 21:26:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.006 21:26:14 -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.006 21:26:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.006 21:26:14 -- paths/export.sh@5 -- # export PATH 00:25:20.006 21:26:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.006 21:26:14 -- nvmf/common.sh@47 -- # : 0 00:25:20.006 21:26:14 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:20.006 21:26:14 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:20.006 21:26:14 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:20.006 21:26:14 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:20.006 21:26:14 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:20.006 21:26:14 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:20.006 21:26:14 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:20.006 21:26:14 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:20.006 21:26:14 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:20.006 21:26:14 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:20.006 21:26:14 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:25:20.006 21:26:14 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:25:20.006 21:26:14 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:20.006 21:26:14 -- nvmf/common.sh@437 -- # prepare_net_devs 00:25:20.006 21:26:14 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:25:20.006 21:26:14 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:25:20.006 21:26:14 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:20.006 21:26:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:20.006 21:26:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:20.006 21:26:14 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:25:20.006 21:26:14 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:25:20.006 21:26:14 -- nvmf/common.sh@285 -- # xtrace_disable 00:25:20.006 21:26:14 -- common/autotest_common.sh@10 -- # set +x 00:25:25.288 21:26:19 -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:25.288 21:26:19 -- nvmf/common.sh@291 -- # pci_devs=() 00:25:25.288 21:26:19 -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:25.288 21:26:19 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:25.288 21:26:19 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:25.288 21:26:19 -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:25.288 21:26:19 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:25.288 21:26:19 -- nvmf/common.sh@295 -- # net_devs=() 00:25:25.288 21:26:19 -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:25.288 21:26:19 -- nvmf/common.sh@296 -- # e810=() 00:25:25.288 21:26:19 -- nvmf/common.sh@296 -- # local -ga e810 00:25:25.288 21:26:19 -- nvmf/common.sh@297 -- # x722=() 00:25:25.288 21:26:19 -- nvmf/common.sh@297 -- # local -ga x722 00:25:25.288 21:26:19 -- nvmf/common.sh@298 -- # mlx=() 00:25:25.288 21:26:19 -- nvmf/common.sh@298 -- # local -ga mlx 00:25:25.288 21:26:19 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:25.288 21:26:19 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:25.288 21:26:19 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:25.288 21:26:19 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:25.288 21:26:19 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:25.288 21:26:19 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:25.288 21:26:19 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:25.288 21:26:19 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:25.288 21:26:19 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:25.288 21:26:19 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:25.288 21:26:19 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:25.288 21:26:19 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:25.288 21:26:19 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:25.288 21:26:19 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:25:25.288 21:26:19 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:25:25.288 21:26:19 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:25:25.288 21:26:19 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:25.288 21:26:19 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:25.288 21:26:19 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:25:25.288 Found 0000:27:00.0 (0x8086 - 0x159b) 00:25:25.288 21:26:19 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:25.288 21:26:19 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:25.288 21:26:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:25.288 21:26:19 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:25.288 21:26:19 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:25.288 21:26:19 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:25.288 21:26:19 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:25:25.288 Found 0000:27:00.1 (0x8086 - 0x159b) 00:25:25.288 21:26:19 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:25.288 21:26:19 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:25.288 21:26:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:25.288 21:26:19 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:25.288 21:26:19 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:25.288 
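The xtrace above shows nvmf/common.sh sorting the host's NICs into e810/x722/mlx buckets by PCI vendor and device ID; 0x8086:0x159b matches the e810 list, so both ports (0000:27:00.0 and 0000:27:00.1) are accepted. A minimal sketch of that classification, assuming sysfs reads in place of the script's pci_bus_cache lookup (the ID list is the one visible in the trace; everything else is illustrative):

intel=0x8086
e810=(0x1592 0x159b)                          # device IDs the script treats as E810
for pci in /sys/bus/pci/devices/*; do
    vendor=$(<"$pci/vendor") device=$(<"$pci/device")
    # match vendor, then check membership in the e810 ID list
    if [[ $vendor == "$intel" && " ${e810[*]} " == *" $device "* ]]; then
        echo "Found ${pci##*/} ($vendor - $device)"
    fi
done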
21:26:19 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:25.288 21:26:19 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:25:25.288 21:26:19 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:25.288 21:26:19 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:25.288 21:26:19 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:25.288 21:26:19 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:25.288 21:26:19 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:25:25.288 Found net devices under 0000:27:00.0: cvl_0_0 00:25:25.288 21:26:19 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:25.288 21:26:19 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:25.288 21:26:19 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:25.288 21:26:19 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:25.288 21:26:19 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:25.288 21:26:19 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:25:25.288 Found net devices under 0000:27:00.1: cvl_0_1 00:25:25.288 21:26:19 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:25.288 21:26:19 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:25:25.288 21:26:19 -- nvmf/common.sh@403 -- # is_hw=yes 00:25:25.288 21:26:19 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:25:25.288 21:26:19 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:25:25.288 21:26:19 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:25:25.288 21:26:19 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:25.288 21:26:19 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:25.288 21:26:19 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:25.288 21:26:19 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:25.288 21:26:19 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:25.288 21:26:19 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:25.288 21:26:19 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:25.288 21:26:19 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:25.288 21:26:19 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:25.288 21:26:19 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:25.288 21:26:19 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:25.288 21:26:19 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:25.288 21:26:19 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:25.288 21:26:19 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:25.289 21:26:19 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:25.289 21:26:19 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:25.289 21:26:19 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:25.289 21:26:19 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:25.289 21:26:19 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:25.289 21:26:19 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:25.289 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:25.289 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms 00:25:25.289 00:25:25.289 --- 10.0.0.2 ping statistics --- 00:25:25.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:25.289 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:25:25.289 21:26:19 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:25.289 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:25.289 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.379 ms 00:25:25.289 00:25:25.289 --- 10.0.0.1 ping statistics --- 00:25:25.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:25.289 rtt min/avg/max/mdev = 0.379/0.379/0.379/0.000 ms 00:25:25.289 21:26:19 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:25.289 21:26:19 -- nvmf/common.sh@411 -- # return 0 00:25:25.289 21:26:19 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:25:25.289 21:26:19 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:25.289 21:26:19 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:25:25.289 21:26:19 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:25:25.289 21:26:19 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:25.289 21:26:19 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:25:25.289 21:26:19 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:25:25.289 21:26:19 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:25:25.289 21:26:19 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:25.289 21:26:19 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:25.289 21:26:19 -- common/autotest_common.sh@10 -- # set +x 00:25:25.289 21:26:19 -- nvmf/common.sh@470 -- # nvmfpid=1545144 00:25:25.289 21:26:19 -- nvmf/common.sh@471 -- # waitforlisten 1545144 00:25:25.289 21:26:19 -- common/autotest_common.sh@817 -- # '[' -z 1545144 ']' 00:25:25.289 21:26:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:25.289 21:26:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:25.289 21:26:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:25.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:25.289 21:26:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:25.289 21:26:19 -- common/autotest_common.sh@10 -- # set +x 00:25:25.289 21:26:19 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:25.289 [2024-04-23 21:26:19.514102] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 00:25:25.289 [2024-04-23 21:26:19.514204] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:25.549 EAL: No free 2048 kB hugepages reported on node 1 00:25:25.549 [2024-04-23 21:26:19.634047] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:25.549 [2024-04-23 21:26:19.728082] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:25.549 [2024-04-23 21:26:19.728117] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:25.549 [2024-04-23 21:26:19.728128] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:25.549 [2024-04-23 21:26:19.728137] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:25.549 [2024-04-23 21:26:19.728144] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:25.549 [2024-04-23 21:26:19.728288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:25.549 [2024-04-23 21:26:19.728390] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:25.549 [2024-04-23 21:26:19.728486] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:25.549 [2024-04-23 21:26:19.728496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:26.121 21:26:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:26.121 21:26:20 -- common/autotest_common.sh@850 -- # return 0 00:25:26.121 21:26:20 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:26.121 21:26:20 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:26.121 21:26:20 -- common/autotest_common.sh@10 -- # set +x 00:25:26.121 21:26:20 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:26.121 21:26:20 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:26.121 21:26:20 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:26.121 21:26:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:26.121 21:26:20 -- common/autotest_common.sh@10 -- # set +x 00:25:26.121 Malloc0 00:25:26.121 21:26:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:26.121 21:26:20 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:25:26.121 21:26:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:26.121 21:26:20 -- common/autotest_common.sh@10 -- # set +x 00:25:26.121 Delay0 00:25:26.121 21:26:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:26.121 21:26:20 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:26.121 21:26:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:26.121 21:26:20 -- common/autotest_common.sh@10 -- # set +x 00:25:26.121 [2024-04-23 21:26:20.317346] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:26.121 21:26:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:26.121 21:26:20 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:25:26.121 21:26:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:26.121 21:26:20 -- common/autotest_common.sh@10 -- # set +x 00:25:26.121 21:26:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:26.121 21:26:20 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:26.121 21:26:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:26.121 21:26:20 -- common/autotest_common.sh@10 -- # set +x 00:25:26.121 21:26:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:26.121 21:26:20 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:26.121 21:26:20 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:25:26.121 21:26:20 -- common/autotest_common.sh@10 -- # set +x 00:25:26.121 [2024-04-23 21:26:20.345577] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:26.121 21:26:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:26.121 21:26:20 -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:28.029 21:26:21 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:25:28.029 21:26:21 -- common/autotest_common.sh@1184 -- # local i=0 00:25:28.029 21:26:21 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:25:28.029 21:26:21 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:25:28.029 21:26:21 -- common/autotest_common.sh@1191 -- # sleep 2 00:25:29.937 21:26:23 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:25:29.937 21:26:23 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:25:29.937 21:26:23 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:25:29.937 21:26:23 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:25:29.937 21:26:23 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:25:29.937 21:26:23 -- common/autotest_common.sh@1194 -- # return 0 00:25:29.937 21:26:23 -- target/initiator_timeout.sh@35 -- # fio_pid=1546046 00:25:29.937 21:26:23 -- target/initiator_timeout.sh@37 -- # sleep 3 00:25:29.937 21:26:23 -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:25:29.937 [global] 00:25:29.937 thread=1 00:25:29.937 invalidate=1 00:25:29.937 rw=write 00:25:29.937 time_based=1 00:25:29.937 runtime=60 00:25:29.937 ioengine=libaio 00:25:29.937 direct=1 00:25:29.937 bs=4096 00:25:29.937 iodepth=1 00:25:29.937 norandommap=0 00:25:29.937 numjobs=1 00:25:29.937 00:25:29.937 verify_dump=1 00:25:29.937 verify_backlog=512 00:25:29.937 verify_state_save=0 00:25:29.937 do_verify=1 00:25:29.937 verify=crc32c-intel 00:25:29.937 [job0] 00:25:29.937 filename=/dev/nvme0n1 00:25:29.937 Could not set queue depth (nvme0n1) 00:25:30.197 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:30.197 fio-3.35 00:25:30.197 Starting 1 thread 00:25:32.726 21:26:26 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:25:32.726 21:26:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:32.726 21:26:26 -- common/autotest_common.sh@10 -- # set +x 00:25:32.726 true 00:25:32.726 21:26:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:32.726 21:26:26 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:25:32.726 21:26:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:32.726 21:26:26 -- common/autotest_common.sh@10 -- # set +x 00:25:32.726 true 00:25:32.726 21:26:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:32.726 21:26:26 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:25:32.727 21:26:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:32.727 21:26:26 -- common/autotest_common.sh@10 -- # set +x 00:25:32.727 true 00:25:32.727 21:26:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:32.727 
21:26:26 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:25:32.727 21:26:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:32.727 21:26:26 -- common/autotest_common.sh@10 -- # set +x 00:25:32.727 true 00:25:32.727 21:26:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:32.727 21:26:26 -- target/initiator_timeout.sh@45 -- # sleep 3 00:25:36.015 21:26:29 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:25:36.015 21:26:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:36.015 21:26:29 -- common/autotest_common.sh@10 -- # set +x 00:25:36.015 true 00:25:36.015 21:26:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:36.015 21:26:29 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:25:36.015 21:26:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:36.015 21:26:29 -- common/autotest_common.sh@10 -- # set +x 00:25:36.015 true 00:25:36.015 21:26:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:36.015 21:26:29 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:25:36.015 21:26:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:36.015 21:26:29 -- common/autotest_common.sh@10 -- # set +x 00:25:36.015 true 00:25:36.015 21:26:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:36.015 21:26:29 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:25:36.015 21:26:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:36.015 21:26:29 -- common/autotest_common.sh@10 -- # set +x 00:25:36.015 true 00:25:36.015 21:26:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:36.015 21:26:29 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:25:36.015 21:26:29 -- target/initiator_timeout.sh@54 -- # wait 1546046 00:26:32.261 00:26:32.261 job0: (groupid=0, jobs=1): err= 0: pid=1546211: Tue Apr 23 21:27:24 2024 00:26:32.261 read: IOPS=65, BW=264KiB/s (270kB/s)(15.5MiB/60040msec) 00:26:32.261 slat (usec): min=3, max=10481, avg=11.78, stdev=166.70 00:26:32.261 clat (usec): min=305, max=45444, avg=4353.62, stdev=12185.41 00:26:32.261 lat (usec): min=310, max=53112, avg=4365.40, stdev=12203.96 00:26:32.261 clat percentiles (usec): 00:26:32.261 | 1.00th=[ 334], 5.00th=[ 355], 10.00th=[ 363], 20.00th=[ 371], 00:26:32.261 | 30.00th=[ 375], 40.00th=[ 379], 50.00th=[ 388], 60.00th=[ 416], 00:26:32.261 | 70.00th=[ 445], 80.00th=[ 482], 90.00th=[ 586], 95.00th=[42206], 00:26:32.261 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[44827], 00:26:32.261 | 99.99th=[45351] 00:26:32.261 write: IOPS=68, BW=273KiB/s (279kB/s)(16.0MiB/60040msec); 0 zone resets 00:26:32.261 slat (usec): min=4, max=32427, avg=16.68, stdev=506.55 00:26:32.261 clat (usec): min=171, max=41643k, avg=10415.88, stdev=650670.24 00:26:32.261 lat (usec): min=178, max=41643k, avg=10432.56, stdev=650670.29 00:26:32.261 clat percentiles (usec): 00:26:32.261 | 1.00th=[ 192], 5.00th=[ 212], 10.00th=[ 219], 00:26:32.261 | 20.00th=[ 225], 30.00th=[ 227], 40.00th=[ 231], 00:26:32.261 | 50.00th=[ 235], 60.00th=[ 241], 70.00th=[ 262], 00:26:32.261 | 80.00th=[ 273], 90.00th=[ 285], 95.00th=[ 322], 00:26:32.261 | 99.00th=[ 375], 99.50th=[ 429], 99.90th=[ 840], 00:26:32.261 | 99.95th=[ 1020], 99.99th=[17112761] 00:26:32.261 bw ( KiB/s): min= 1008, max= 6488, per=100.00%, avg=4681.14, stdev=2356.87, samples=7 00:26:32.261 iops : min= 252, 
max= 1622, avg=1170.29, stdev=589.22, samples=7 00:26:32.261 lat (usec) : 250=33.53%, 500=58.23%, 750=3.45%, 1000=0.09% 00:26:32.261 lat (msec) : 2=0.01%, 4=0.01%, 50=4.67%, >=2000=0.01% 00:26:32.261 cpu : usr=0.09%, sys=0.16%, ctx=8059, majf=0, minf=1 00:26:32.261 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:32.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.261 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.261 issued rwts: total=3960,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:32.261 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:32.261 00:26:32.261 Run status group 0 (all jobs): 00:26:32.261 READ: bw=264KiB/s (270kB/s), 264KiB/s-264KiB/s (270kB/s-270kB/s), io=15.5MiB (16.2MB), run=60040-60040msec 00:26:32.261 WRITE: bw=273KiB/s (279kB/s), 273KiB/s-273KiB/s (279kB/s-279kB/s), io=16.0MiB (16.8MB), run=60040-60040msec 00:26:32.261 00:26:32.261 Disk stats (read/write): 00:26:32.261 nvme0n1: ios=4008/4096, merge=0/0, ticks=18409/981, in_queue=19390, util=99.97% 00:26:32.261 21:27:24 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:32.261 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:32.261 21:27:24 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:26:32.261 21:27:24 -- common/autotest_common.sh@1205 -- # local i=0 00:26:32.261 21:27:24 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:26:32.261 21:27:24 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:32.261 21:27:24 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:26:32.261 21:27:24 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:32.261 21:27:24 -- common/autotest_common.sh@1217 -- # return 0 00:26:32.261 21:27:24 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:26:32.261 21:27:24 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:26:32.261 nvmf hotplug test: fio successful as expected 00:26:32.261 21:27:24 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:32.261 21:27:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:32.261 21:27:24 -- common/autotest_common.sh@10 -- # set +x 00:26:32.261 21:27:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:32.261 21:27:24 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:26:32.261 21:27:24 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:26:32.261 21:27:24 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:26:32.261 21:27:24 -- nvmf/common.sh@477 -- # nvmfcleanup 00:26:32.261 21:27:24 -- nvmf/common.sh@117 -- # sync 00:26:32.261 21:27:24 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:32.261 21:27:24 -- nvmf/common.sh@120 -- # set +e 00:26:32.261 21:27:24 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:32.261 21:27:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:32.261 rmmod nvme_tcp 00:26:32.261 rmmod nvme_fabrics 00:26:32.261 rmmod nvme_keyring 00:26:32.261 21:27:24 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:32.261 21:27:24 -- nvmf/common.sh@124 -- # set -e 00:26:32.261 21:27:24 -- nvmf/common.sh@125 -- # return 0 00:26:32.261 21:27:24 -- nvmf/common.sh@478 -- # '[' -n 1545144 ']' 00:26:32.261 21:27:24 -- nvmf/common.sh@479 -- # killprocess 1545144 00:26:32.261 21:27:24 -- 
common/autotest_common.sh@936 -- # '[' -z 1545144 ']' 00:26:32.261 21:27:24 -- common/autotest_common.sh@940 -- # kill -0 1545144 00:26:32.261 21:27:24 -- common/autotest_common.sh@941 -- # uname 00:26:32.261 21:27:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:32.261 21:27:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1545144 00:26:32.261 21:27:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:32.261 21:27:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:32.261 21:27:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1545144' 00:26:32.261 killing process with pid 1545144 00:26:32.261 21:27:24 -- common/autotest_common.sh@955 -- # kill 1545144 00:26:32.261 21:27:24 -- common/autotest_common.sh@960 -- # wait 1545144 00:26:32.261 21:27:25 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:26:32.261 21:27:25 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:26:32.261 21:27:25 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:26:32.261 21:27:25 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:32.261 21:27:25 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:32.261 21:27:25 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:32.261 21:27:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:32.261 21:27:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:33.262 21:27:27 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:33.262 00:26:33.262 real 1m13.203s 00:26:33.262 user 4m36.036s 00:26:33.262 sys 0m5.080s 00:26:33.262 21:27:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:33.262 21:27:27 -- common/autotest_common.sh@10 -- # set +x 00:26:33.262 ************************************ 00:26:33.262 END TEST nvmf_initiator_timeout 00:26:33.262 ************************************ 00:26:33.262 21:27:27 -- nvmf/nvmf.sh@70 -- # [[ phy-fallback == phy ]] 00:26:33.262 21:27:27 -- nvmf/nvmf.sh@84 -- # timing_exit target 00:26:33.262 21:27:27 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:33.262 21:27:27 -- common/autotest_common.sh@10 -- # set +x 00:26:33.262 21:27:27 -- nvmf/nvmf.sh@86 -- # timing_enter host 00:26:33.262 21:27:27 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:33.262 21:27:27 -- common/autotest_common.sh@10 -- # set +x 00:26:33.262 21:27:27 -- nvmf/nvmf.sh@88 -- # [[ 0 -eq 0 ]] 00:26:33.262 21:27:27 -- nvmf/nvmf.sh@89 -- # run_test nvmf_multicontroller /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:26:33.262 21:27:27 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:33.262 21:27:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:33.262 21:27:27 -- common/autotest_common.sh@10 -- # set +x 00:26:33.521 ************************************ 00:26:33.521 START TEST nvmf_multicontroller 00:26:33.521 ************************************ 00:26:33.521 21:27:27 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:26:33.521 * Looking for test storage... 
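The teardown just logged follows a recurring killprocess pattern: probe the pid with kill -0, confirm via ps that the target is the SPDK reactor rather than a sudo wrapper, then kill and wait so the timing summary reflects a clean exit. A loose sketch of that helper as exercised in this log (the uname branch and the sudo kill path are elided; SPDK's real helper differs in those details):

killprocess() {                                     # only the path this log exercised
    local pid=$1
    kill -0 "$pid" || return 1                      # is the process still alive?
    [[ $(ps --no-headers -o comm= "$pid") == sudo ]] && return 1   # sudo path elided
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid"                      # wait works because nvmf_tgt is a child
}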
00:26:33.521 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:26:33.521 21:27:27 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:26:33.521 21:27:27 -- nvmf/common.sh@7 -- # uname -s 00:26:33.521 21:27:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:33.521 21:27:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:33.521 21:27:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:33.521 21:27:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:33.521 21:27:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:33.521 21:27:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:33.521 21:27:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:33.521 21:27:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:33.521 21:27:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:33.521 21:27:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:33.521 21:27:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:26:33.521 21:27:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:26:33.521 21:27:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:33.521 21:27:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:33.521 21:27:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:26:33.521 21:27:27 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:33.521 21:27:27 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:26:33.521 21:27:27 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:33.521 21:27:27 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:33.521 21:27:27 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:33.521 21:27:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.521 21:27:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.521 21:27:27 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.521 21:27:27 -- paths/export.sh@5 -- # export PATH 00:26:33.521 21:27:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.521 21:27:27 -- nvmf/common.sh@47 -- # : 0 00:26:33.521 21:27:27 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:33.521 21:27:27 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:33.521 21:27:27 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:33.521 21:27:27 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:33.521 21:27:27 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:33.521 21:27:27 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:33.521 21:27:27 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:33.521 21:27:27 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:33.521 21:27:27 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:33.521 21:27:27 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:33.521 21:27:27 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:26:33.521 21:27:27 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:26:33.521 21:27:27 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:33.521 21:27:27 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:26:33.521 21:27:27 -- host/multicontroller.sh@23 -- # nvmftestinit 00:26:33.521 21:27:27 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:26:33.521 21:27:27 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:33.521 21:27:27 -- nvmf/common.sh@437 -- # prepare_net_devs 00:26:33.521 21:27:27 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:26:33.521 21:27:27 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:26:33.521 21:27:27 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:33.521 21:27:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:33.521 21:27:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:33.521 21:27:27 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:26:33.521 21:27:27 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:26:33.521 21:27:27 -- nvmf/common.sh@285 -- # xtrace_disable 00:26:33.521 21:27:27 -- common/autotest_common.sh@10 -- # set +x 00:26:38.796 21:27:32 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:38.796 21:27:32 -- nvmf/common.sh@291 -- # pci_devs=() 00:26:38.796 21:27:32 -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:38.796 21:27:32 -- nvmf/common.sh@292 -- # pci_net_devs=() 
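The device scan that follows resolves each matched PCI function to its kernel net device by globbing sysfs, exactly as the pci_net_devs lines in the trace do. A condensed sketch of that lookup, with nullglob standing in for the script's own element-count check (the PCI address is the one reported in this log):

pci=0000:27:00.0                                   # address reported in this log
shopt -s nullglob                                  # empty array when no netdev exists
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
if (( ${#pci_net_devs[@]} )); then
    pci_net_devs=("${pci_net_devs[@]##*/}")        # keep only the interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
fi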
00:26:38.796 21:27:32 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:38.796 21:27:32 -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:38.796 21:27:32 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:38.796 21:27:32 -- nvmf/common.sh@295 -- # net_devs=() 00:26:38.796 21:27:32 -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:38.796 21:27:32 -- nvmf/common.sh@296 -- # e810=() 00:26:38.796 21:27:32 -- nvmf/common.sh@296 -- # local -ga e810 00:26:38.796 21:27:32 -- nvmf/common.sh@297 -- # x722=() 00:26:38.796 21:27:32 -- nvmf/common.sh@297 -- # local -ga x722 00:26:38.796 21:27:32 -- nvmf/common.sh@298 -- # mlx=() 00:26:38.796 21:27:32 -- nvmf/common.sh@298 -- # local -ga mlx 00:26:38.796 21:27:32 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:38.796 21:27:32 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:38.796 21:27:32 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:38.796 21:27:32 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:38.796 21:27:32 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:38.796 21:27:32 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:38.796 21:27:32 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:38.796 21:27:32 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:38.796 21:27:32 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:38.796 21:27:32 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:38.796 21:27:32 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:38.796 21:27:32 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:38.796 21:27:32 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:38.796 21:27:32 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:26:38.796 21:27:32 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:26:38.796 21:27:32 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:26:38.796 21:27:32 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:38.796 21:27:32 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:38.796 21:27:32 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:26:38.796 Found 0000:27:00.0 (0x8086 - 0x159b) 00:26:38.796 21:27:32 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:38.796 21:27:32 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:38.796 21:27:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:38.796 21:27:32 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:38.796 21:27:32 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:38.796 21:27:32 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:38.796 21:27:32 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:26:38.796 Found 0000:27:00.1 (0x8086 - 0x159b) 00:26:38.796 21:27:32 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:38.796 21:27:32 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:38.796 21:27:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:38.796 21:27:32 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:38.796 21:27:32 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:38.796 21:27:32 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:38.796 21:27:32 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:26:38.796 21:27:32 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:38.796 21:27:32 -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:38.796 21:27:32 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:38.796 21:27:32 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:38.796 21:27:32 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:26:38.796 Found net devices under 0000:27:00.0: cvl_0_0 00:26:38.796 21:27:32 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:38.796 21:27:32 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:38.796 21:27:32 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:38.796 21:27:32 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:38.796 21:27:32 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:38.796 21:27:32 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:26:38.796 Found net devices under 0000:27:00.1: cvl_0_1 00:26:38.796 21:27:32 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:38.796 21:27:32 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:26:38.796 21:27:32 -- nvmf/common.sh@403 -- # is_hw=yes 00:26:38.796 21:27:32 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:26:38.796 21:27:32 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:26:38.796 21:27:32 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:26:38.796 21:27:32 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:38.796 21:27:32 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:38.796 21:27:32 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:38.796 21:27:32 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:38.796 21:27:32 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:38.796 21:27:32 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:38.796 21:27:32 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:38.796 21:27:32 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:38.796 21:27:32 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:38.796 21:27:32 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:38.796 21:27:32 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:38.796 21:27:32 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:38.796 21:27:32 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:38.796 21:27:32 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:38.796 21:27:32 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:38.796 21:27:32 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:38.796 21:27:32 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:38.796 21:27:32 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:38.796 21:27:32 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:38.796 21:27:32 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:38.796 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:38.796 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.319 ms 00:26:38.796 00:26:38.796 --- 10.0.0.2 ping statistics --- 00:26:38.796 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:38.796 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:26:38.796 21:27:32 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:38.796 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:38.796 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:26:38.796 00:26:38.796 --- 10.0.0.1 ping statistics --- 00:26:38.796 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:38.796 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:26:38.796 21:27:32 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:38.796 21:27:32 -- nvmf/common.sh@411 -- # return 0 00:26:38.796 21:27:32 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:26:38.796 21:27:32 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:38.796 21:27:32 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:26:38.796 21:27:32 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:26:38.796 21:27:32 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:38.796 21:27:32 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:26:38.796 21:27:32 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:26:38.796 21:27:32 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:26:38.796 21:27:32 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:26:38.796 21:27:32 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:38.796 21:27:32 -- common/autotest_common.sh@10 -- # set +x 00:26:38.796 21:27:32 -- nvmf/common.sh@470 -- # nvmfpid=1562082 00:26:38.796 21:27:32 -- nvmf/common.sh@471 -- # waitforlisten 1562082 00:26:38.796 21:27:32 -- common/autotest_common.sh@817 -- # '[' -z 1562082 ']' 00:26:38.796 21:27:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:38.796 21:27:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:38.796 21:27:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:38.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:38.796 21:27:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:38.796 21:27:32 -- common/autotest_common.sh@10 -- # set +x 00:26:38.796 21:27:32 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:38.796 [2024-04-23 21:27:32.841524] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 00:26:38.796 [2024-04-23 21:27:32.841638] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:38.796 EAL: No free 2048 kB hugepages reported on node 1 00:26:38.796 [2024-04-23 21:27:32.960264] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:38.796 [2024-04-23 21:27:33.053364] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:38.796 [2024-04-23 21:27:33.053401] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:38.796 [2024-04-23 21:27:33.053411] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:38.796 [2024-04-23 21:27:33.053420] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:38.796 [2024-04-23 21:27:33.053427] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
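A minimal standalone sketch of the two-port loopback topology that nvmf_tcp_init assembles above, using the same interface names and addresses this log discovered (the canonical logic lives in nvmf/common.sh; requires root):

NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0                       # clear any stale addressing
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1            # initiator side stays in the host
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in
ping -c 1 10.0.0.2                             # host -> namespace, as verified above
ip netns exec "$NS" ping -c 1 10.0.0.1         # namespace -> host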
00:26:38.796 [2024-04-23 21:27:33.053563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:38.796 [2024-04-23 21:27:33.053674] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:38.796 [2024-04-23 21:27:33.053685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:39.368 21:27:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:39.368 21:27:33 -- common/autotest_common.sh@850 -- # return 0 00:26:39.368 21:27:33 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:26:39.368 21:27:33 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:39.368 21:27:33 -- common/autotest_common.sh@10 -- # set +x 00:26:39.368 21:27:33 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:39.368 21:27:33 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:39.368 21:27:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:39.368 21:27:33 -- common/autotest_common.sh@10 -- # set +x 00:26:39.368 [2024-04-23 21:27:33.595880] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:39.368 21:27:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:39.368 21:27:33 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:39.368 21:27:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:39.368 21:27:33 -- common/autotest_common.sh@10 -- # set +x 00:26:39.629 Malloc0 00:26:39.629 21:27:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:39.629 21:27:33 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:39.629 21:27:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:39.629 21:27:33 -- common/autotest_common.sh@10 -- # set +x 00:26:39.629 21:27:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:39.629 21:27:33 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:39.629 21:27:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:39.629 21:27:33 -- common/autotest_common.sh@10 -- # set +x 00:26:39.629 21:27:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:39.629 21:27:33 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:39.629 21:27:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:39.629 21:27:33 -- common/autotest_common.sh@10 -- # set +x 00:26:39.629 [2024-04-23 21:27:33.682734] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:39.629 21:27:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:39.629 21:27:33 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:39.629 21:27:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:39.629 21:27:33 -- common/autotest_common.sh@10 -- # set +x 00:26:39.629 [2024-04-23 21:27:33.690638] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:39.629 21:27:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:39.629 21:27:33 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:39.629 21:27:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:39.629 21:27:33 -- common/autotest_common.sh@10 -- # set +x 00:26:39.629 Malloc1 00:26:39.629 21:27:33 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:39.629 21:27:33 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:26:39.629 21:27:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:39.629 21:27:33 -- common/autotest_common.sh@10 -- # set +x 00:26:39.629 21:27:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:39.629 21:27:33 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:26:39.629 21:27:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:39.629 21:27:33 -- common/autotest_common.sh@10 -- # set +x 00:26:39.629 21:27:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:39.629 21:27:33 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:39.629 21:27:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:39.629 21:27:33 -- common/autotest_common.sh@10 -- # set +x 00:26:39.629 21:27:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:39.629 21:27:33 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:26:39.629 21:27:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:39.629 21:27:33 -- common/autotest_common.sh@10 -- # set +x 00:26:39.629 21:27:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:39.629 21:27:33 -- host/multicontroller.sh@44 -- # bdevperf_pid=1562273 00:26:39.629 21:27:33 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:39.629 21:27:33 -- host/multicontroller.sh@47 -- # waitforlisten 1562273 /var/tmp/bdevperf.sock 00:26:39.629 21:27:33 -- common/autotest_common.sh@817 -- # '[' -z 1562273 ']' 00:26:39.629 21:27:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:39.629 21:27:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:39.629 21:27:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:39.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
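The bdevperf invocation above follows the wait-for-RPC pattern: -z starts the app idle on a private RPC socket, controllers are attached through that socket, and bdevperf.py later kicks off the actual run. A condensed sketch assuming the workspace path from this log (rpc_cmd in the log is a thin wrapper around rpc.py):

SPDK=/var/jenkins/workspace/dsa-phy-autotest/spdk
SOCK=/var/tmp/bdevperf.sock
"$SPDK/build/examples/bdevperf" -z -r "$SOCK" -q 128 -o 4096 -w write -t 1 -f &
while [ ! -S "$SOCK" ]; do sleep 0.1; done     # crude stand-in for waitforlisten
"$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -i 10.0.0.2 -c 60000
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests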
00:26:39.629 21:27:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:39.629 21:27:33 -- common/autotest_common.sh@10 -- # set +x 00:26:39.629 21:27:33 -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:26:40.570 21:27:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:40.570 21:27:34 -- common/autotest_common.sh@850 -- # return 0 00:26:40.570 21:27:34 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:26:40.570 21:27:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:40.570 21:27:34 -- common/autotest_common.sh@10 -- # set +x 00:26:40.570 NVMe0n1 00:26:40.570 21:27:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:40.570 21:27:34 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:40.570 21:27:34 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:26:40.570 21:27:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:40.570 21:27:34 -- common/autotest_common.sh@10 -- # set +x 00:26:40.570 21:27:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:40.570 1 00:26:40.570 21:27:34 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:26:40.570 21:27:34 -- common/autotest_common.sh@638 -- # local es=0 00:26:40.570 21:27:34 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:26:40.570 21:27:34 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:26:40.570 21:27:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:40.570 21:27:34 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:26:40.570 21:27:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:40.570 21:27:34 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:26:40.570 21:27:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:40.570 21:27:34 -- common/autotest_common.sh@10 -- # set +x 00:26:40.570 request: 00:26:40.570 { 00:26:40.570 "name": "NVMe0", 00:26:40.570 "trtype": "tcp", 00:26:40.570 "traddr": "10.0.0.2", 00:26:40.570 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:26:40.570 "hostaddr": "10.0.0.2", 00:26:40.570 "hostsvcid": "60000", 00:26:40.570 "adrfam": "ipv4", 00:26:40.570 "trsvcid": "4420", 00:26:40.570 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:40.570 "method": "bdev_nvme_attach_controller", 00:26:40.570 "req_id": 1 00:26:40.570 } 00:26:40.570 Got JSON-RPC error response 00:26:40.570 response: 00:26:40.570 { 00:26:40.570 "code": -114, 00:26:40.570 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:26:40.570 } 00:26:40.570 21:27:34 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:26:40.570 21:27:34 -- common/autotest_common.sh@641 -- # es=1 00:26:40.570 21:27:34 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:26:40.570 
21:27:34 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:26:40.570 21:27:34 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:26:40.570 21:27:34 -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:26:40.570 21:27:34 -- common/autotest_common.sh@638 -- # local es=0 00:26:40.571 21:27:34 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:26:40.571 21:27:34 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:26:40.571 21:27:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:40.571 21:27:34 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:26:40.571 21:27:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:40.571 21:27:34 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:26:40.571 21:27:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:40.571 21:27:34 -- common/autotest_common.sh@10 -- # set +x 00:26:40.571 request: 00:26:40.571 { 00:26:40.571 "name": "NVMe0", 00:26:40.571 "trtype": "tcp", 00:26:40.571 "traddr": "10.0.0.2", 00:26:40.571 "hostaddr": "10.0.0.2", 00:26:40.571 "hostsvcid": "60000", 00:26:40.571 "adrfam": "ipv4", 00:26:40.571 "trsvcid": "4420", 00:26:40.571 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:40.571 "method": "bdev_nvme_attach_controller", 00:26:40.571 "req_id": 1 00:26:40.571 } 00:26:40.571 Got JSON-RPC error response 00:26:40.571 response: 00:26:40.571 { 00:26:40.571 "code": -114, 00:26:40.571 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:26:40.571 } 00:26:40.571 21:27:34 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:26:40.571 21:27:34 -- common/autotest_common.sh@641 -- # es=1 00:26:40.571 21:27:34 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:26:40.571 21:27:34 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:26:40.571 21:27:34 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:26:40.571 21:27:34 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:26:40.571 21:27:34 -- common/autotest_common.sh@638 -- # local es=0 00:26:40.571 21:27:34 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:26:40.571 21:27:34 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:26:40.571 21:27:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:40.571 21:27:34 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:26:40.571 21:27:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:40.571 21:27:34 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:26:40.571 21:27:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:40.571 21:27:34 -- 
common/autotest_common.sh@10 -- # set +x 00:26:40.571 request: 00:26:40.571 { 00:26:40.571 "name": "NVMe0", 00:26:40.571 "trtype": "tcp", 00:26:40.571 "traddr": "10.0.0.2", 00:26:40.571 "hostaddr": "10.0.0.2", 00:26:40.571 "hostsvcid": "60000", 00:26:40.571 "adrfam": "ipv4", 00:26:40.571 "trsvcid": "4420", 00:26:40.571 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:40.571 "multipath": "disable", 00:26:40.571 "method": "bdev_nvme_attach_controller", 00:26:40.571 "req_id": 1 00:26:40.571 } 00:26:40.571 Got JSON-RPC error response 00:26:40.571 response: 00:26:40.571 { 00:26:40.571 "code": -114, 00:26:40.571 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:26:40.571 } 00:26:40.571 21:27:34 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:26:40.571 21:27:34 -- common/autotest_common.sh@641 -- # es=1 00:26:40.571 21:27:34 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:26:40.571 21:27:34 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:26:40.571 21:27:34 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:26:40.571 21:27:34 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:26:40.571 21:27:34 -- common/autotest_common.sh@638 -- # local es=0 00:26:40.571 21:27:34 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:26:40.571 21:27:34 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:26:40.571 21:27:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:40.571 21:27:34 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:26:40.571 21:27:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:40.571 21:27:34 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:26:40.571 21:27:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:40.571 21:27:34 -- common/autotest_common.sh@10 -- # set +x 00:26:40.571 request: 00:26:40.571 { 00:26:40.571 "name": "NVMe0", 00:26:40.571 "trtype": "tcp", 00:26:40.571 "traddr": "10.0.0.2", 00:26:40.571 "hostaddr": "10.0.0.2", 00:26:40.571 "hostsvcid": "60000", 00:26:40.571 "adrfam": "ipv4", 00:26:40.571 "trsvcid": "4420", 00:26:40.571 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:40.571 "multipath": "failover", 00:26:40.571 "method": "bdev_nvme_attach_controller", 00:26:40.571 "req_id": 1 00:26:40.571 } 00:26:40.571 Got JSON-RPC error response 00:26:40.571 response: 00:26:40.571 { 00:26:40.571 "code": -114, 00:26:40.571 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:26:40.571 } 00:26:40.571 21:27:34 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:26:40.571 21:27:34 -- common/autotest_common.sh@641 -- # es=1 00:26:40.571 21:27:34 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:26:40.571 21:27:34 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:26:40.571 21:27:34 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:26:40.571 21:27:34 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:40.571 
21:27:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:40.571 21:27:34 -- common/autotest_common.sh@10 -- # set +x 00:26:40.832 00:26:40.832 21:27:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:40.832 21:27:34 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:40.832 21:27:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:40.832 21:27:34 -- common/autotest_common.sh@10 -- # set +x 00:26:40.832 21:27:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:40.832 21:27:34 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:26:40.832 21:27:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:40.832 21:27:34 -- common/autotest_common.sh@10 -- # set +x 00:26:40.832 00:26:40.832 21:27:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:40.832 21:27:34 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:40.832 21:27:34 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:26:40.832 21:27:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:40.832 21:27:34 -- common/autotest_common.sh@10 -- # set +x 00:26:40.832 21:27:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:40.832 21:27:34 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:26:40.832 21:27:34 -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:42.213 0 00:26:42.213 21:27:36 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:26:42.213 21:27:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:42.213 21:27:36 -- common/autotest_common.sh@10 -- # set +x 00:26:42.213 21:27:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:42.213 21:27:36 -- host/multicontroller.sh@100 -- # killprocess 1562273 00:26:42.213 21:27:36 -- common/autotest_common.sh@936 -- # '[' -z 1562273 ']' 00:26:42.213 21:27:36 -- common/autotest_common.sh@940 -- # kill -0 1562273 00:26:42.213 21:27:36 -- common/autotest_common.sh@941 -- # uname 00:26:42.213 21:27:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:42.213 21:27:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1562273 00:26:42.213 21:27:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:42.213 21:27:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:42.213 21:27:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1562273' 00:26:42.213 killing process with pid 1562273 00:26:42.213 21:27:36 -- common/autotest_common.sh@955 -- # kill 1562273 00:26:42.213 21:27:36 -- common/autotest_common.sh@960 -- # wait 1562273 00:26:42.471 21:27:36 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:42.471 21:27:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:42.471 21:27:36 -- common/autotest_common.sh@10 -- # set +x 00:26:42.471 21:27:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:42.471 21:27:36 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:42.471 21:27:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:42.471 21:27:36 -- 
common/autotest_common.sh@10 -- # set +x 00:26:42.471 21:27:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:42.471 21:27:36 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:26:42.471 21:27:36 -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:42.471 21:27:36 -- common/autotest_common.sh@1598 -- # read -r file 00:26:42.471 21:27:36 -- common/autotest_common.sh@1597 -- # find /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:26:42.471 21:27:36 -- common/autotest_common.sh@1597 -- # sort -u 00:26:42.471 21:27:36 -- common/autotest_common.sh@1599 -- # cat 00:26:42.471 --- /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:26:42.471 [2024-04-23 21:27:33.844227] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 00:26:42.471 [2024-04-23 21:27:33.844388] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1562273 ] 00:26:42.471 EAL: No free 2048 kB hugepages reported on node 1 00:26:42.471 [2024-04-23 21:27:33.975053] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:42.471 [2024-04-23 21:27:34.066260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:42.471 [2024-04-23 21:27:34.958205] bdev.c:4548:bdev_name_add: *ERROR*: Bdev name c35cf073-9a11-4bff-ac17-dcdf7ead2e33 already exists 00:26:42.471 [2024-04-23 21:27:34.958251] bdev.c:7651:bdev_register: *ERROR*: Unable to add uuid:c35cf073-9a11-4bff-ac17-dcdf7ead2e33 alias for bdev NVMe1n1 00:26:42.471 [2024-04-23 21:27:34.958267] bdev_nvme.c:4272:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:26:42.471 Running I/O for 1 seconds... 
00:26:42.471 00:26:42.471 Latency(us) 00:26:42.471 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:42.471 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:26:42.471 NVMe0n1 : 1.00 24012.13 93.80 0.00 0.00 5321.91 3673.47 12900.24 00:26:42.471 =================================================================================================================== 00:26:42.471 Total : 24012.13 93.80 0.00 0.00 5321.91 3673.47 12900.24 00:26:42.471 Received shutdown signal, test time was about 1.000000 seconds 00:26:42.471 00:26:42.471 Latency(us) 00:26:42.471 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:42.471 =================================================================================================================== 00:26:42.471 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:42.471 --- /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:26:42.471 21:27:36 -- common/autotest_common.sh@1604 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:42.471 21:27:36 -- common/autotest_common.sh@1598 -- # read -r file 00:26:42.471 21:27:36 -- host/multicontroller.sh@108 -- # nvmftestfini 00:26:42.471 21:27:36 -- nvmf/common.sh@477 -- # nvmfcleanup 00:26:42.471 21:27:36 -- nvmf/common.sh@117 -- # sync 00:26:42.471 21:27:36 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:42.471 21:27:36 -- nvmf/common.sh@120 -- # set +e 00:26:42.471 21:27:36 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:42.471 21:27:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:42.471 rmmod nvme_tcp 00:26:42.471 rmmod nvme_fabrics 00:26:42.471 rmmod nvme_keyring 00:26:42.471 21:27:36 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:42.471 21:27:36 -- nvmf/common.sh@124 -- # set -e 00:26:42.471 21:27:36 -- nvmf/common.sh@125 -- # return 0 00:26:42.471 21:27:36 -- nvmf/common.sh@478 -- # '[' -n 1562082 ']' 00:26:42.471 21:27:36 -- nvmf/common.sh@479 -- # killprocess 1562082 00:26:42.471 21:27:36 -- common/autotest_common.sh@936 -- # '[' -z 1562082 ']' 00:26:42.472 21:27:36 -- common/autotest_common.sh@940 -- # kill -0 1562082 00:26:42.472 21:27:36 -- common/autotest_common.sh@941 -- # uname 00:26:42.472 21:27:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:42.472 21:27:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1562082 00:26:42.472 21:27:36 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:26:42.472 21:27:36 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:26:42.472 21:27:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1562082' 00:26:42.472 killing process with pid 1562082 00:26:42.472 21:27:36 -- common/autotest_common.sh@955 -- # kill 1562082 00:26:42.472 21:27:36 -- common/autotest_common.sh@960 -- # wait 1562082 00:26:43.037 21:27:37 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:26:43.037 21:27:37 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:26:43.037 21:27:37 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:26:43.037 21:27:37 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:43.037 21:27:37 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:43.037 21:27:37 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:43.037 21:27:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:43.037 21:27:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:45.573 21:27:39 -- nvmf/common.sh@279 
-- # ip -4 addr flush cvl_0_1 00:26:45.573 00:26:45.573 real 0m11.722s 00:26:45.573 user 0m16.445s 00:26:45.574 sys 0m4.638s 00:26:45.574 21:27:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:45.574 21:27:39 -- common/autotest_common.sh@10 -- # set +x 00:26:45.574 ************************************ 00:26:45.574 END TEST nvmf_multicontroller 00:26:45.574 ************************************ 00:26:45.574 21:27:39 -- nvmf/nvmf.sh@90 -- # run_test nvmf_aer /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:26:45.574 21:27:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:45.574 21:27:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:45.574 21:27:39 -- common/autotest_common.sh@10 -- # set +x 00:26:45.574 ************************************ 00:26:45.574 START TEST nvmf_aer 00:26:45.574 ************************************ 00:26:45.574 21:27:39 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:26:45.574 * Looking for test storage... 00:26:45.574 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:26:45.574 21:27:39 -- host/aer.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:26:45.574 21:27:39 -- nvmf/common.sh@7 -- # uname -s 00:26:45.574 21:27:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:45.574 21:27:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:45.574 21:27:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:45.574 21:27:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:45.574 21:27:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:45.574 21:27:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:45.574 21:27:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:45.574 21:27:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:45.574 21:27:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:45.574 21:27:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:45.574 21:27:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:26:45.574 21:27:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:26:45.574 21:27:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:45.574 21:27:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:45.574 21:27:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:26:45.574 21:27:39 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:45.574 21:27:39 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:26:45.574 21:27:39 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:45.574 21:27:39 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:45.574 21:27:39 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:45.574 21:27:39 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:45.574 21:27:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:45.574 21:27:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:45.574 21:27:39 -- paths/export.sh@5 -- # export PATH 00:26:45.574 21:27:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:45.574 21:27:39 -- nvmf/common.sh@47 -- # : 0 00:26:45.574 21:27:39 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:45.574 21:27:39 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:45.574 21:27:39 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:45.574 21:27:39 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:45.574 21:27:39 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:45.574 21:27:39 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:45.574 21:27:39 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:45.574 21:27:39 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:45.574 21:27:39 -- host/aer.sh@11 -- # nvmftestinit 00:26:45.574 21:27:39 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:26:45.574 21:27:39 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:45.574 21:27:39 -- nvmf/common.sh@437 -- # prepare_net_devs 00:26:45.574 21:27:39 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:26:45.574 21:27:39 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:26:45.574 21:27:39 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:45.574 21:27:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:45.574 21:27:39 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:45.574 21:27:39 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:26:45.574 21:27:39 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:26:45.574 21:27:39 -- nvmf/common.sh@285 -- # xtrace_disable 00:26:45.574 21:27:39 -- common/autotest_common.sh@10 -- # set +x 00:26:50.848 21:27:44 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:50.848 21:27:44 -- nvmf/common.sh@291 -- # pci_devs=() 00:26:50.848 21:27:44 -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:50.848 21:27:44 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:50.848 21:27:44 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:50.848 21:27:44 -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:50.848 21:27:44 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:50.848 21:27:44 -- nvmf/common.sh@295 -- # net_devs=() 00:26:50.848 21:27:44 -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:50.848 21:27:44 -- nvmf/common.sh@296 -- # e810=() 00:26:50.848 21:27:44 -- nvmf/common.sh@296 -- # local -ga e810 00:26:50.848 21:27:44 -- nvmf/common.sh@297 -- # x722=() 00:26:50.848 21:27:44 -- nvmf/common.sh@297 -- # local -ga x722 00:26:50.848 21:27:44 -- nvmf/common.sh@298 -- # mlx=() 00:26:50.848 21:27:44 -- nvmf/common.sh@298 -- # local -ga mlx 00:26:50.848 21:27:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:50.848 21:27:44 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:50.848 21:27:44 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:50.848 21:27:44 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:50.848 21:27:44 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:50.848 21:27:44 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:50.848 21:27:44 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:50.848 21:27:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:50.848 21:27:44 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:50.848 21:27:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:50.848 21:27:44 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:50.848 21:27:44 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:50.848 21:27:44 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:50.848 21:27:44 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:26:50.848 21:27:44 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:26:50.848 21:27:44 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:26:50.849 21:27:44 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:50.849 21:27:44 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:50.849 21:27:44 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:26:50.849 Found 0000:27:00.0 (0x8086 - 0x159b) 00:26:50.849 21:27:44 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:50.849 21:27:44 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:50.849 21:27:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:50.849 21:27:44 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:50.849 21:27:44 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:50.849 21:27:44 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:50.849 21:27:44 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:26:50.849 Found 0000:27:00.1 (0x8086 - 0x159b) 00:26:50.849 21:27:44 
-- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:50.849 21:27:44 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:50.849 21:27:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:50.849 21:27:44 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:50.849 21:27:44 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:50.849 21:27:44 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:50.849 21:27:44 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:26:50.849 21:27:44 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:50.849 21:27:44 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:50.849 21:27:44 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:50.849 21:27:44 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:50.849 21:27:44 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:26:50.849 Found net devices under 0000:27:00.0: cvl_0_0 00:26:50.849 21:27:44 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:50.849 21:27:44 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:50.849 21:27:44 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:50.849 21:27:44 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:50.849 21:27:44 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:50.849 21:27:44 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:26:50.849 Found net devices under 0000:27:00.1: cvl_0_1 00:26:50.849 21:27:44 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:50.849 21:27:44 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:26:50.849 21:27:44 -- nvmf/common.sh@403 -- # is_hw=yes 00:26:50.849 21:27:44 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:26:50.849 21:27:44 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:26:50.849 21:27:44 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:26:50.849 21:27:44 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:50.849 21:27:44 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:50.849 21:27:44 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:50.849 21:27:44 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:50.849 21:27:44 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:50.849 21:27:44 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:50.849 21:27:44 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:50.849 21:27:44 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:50.849 21:27:44 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:50.849 21:27:44 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:50.849 21:27:44 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:50.849 21:27:44 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:50.849 21:27:44 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:50.849 21:27:44 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:50.849 21:27:44 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:50.849 21:27:44 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:50.849 21:27:44 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:50.849 21:27:44 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:50.849 21:27:44 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
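The discovery loop above (nvmf/common.sh@383-389) resolves each PCI function to its kernel netdev through sysfs. A condensed sketch using the addresses from this log:

for pci in 0000:27:00.0 0000:27:00.1; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface name
    # (the real loop also skips functions with no bound netdev)
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done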
00:26:50.849 21:27:44 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:50.849 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:50.849 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.483 ms 00:26:50.849 00:26:50.849 --- 10.0.0.2 ping statistics --- 00:26:50.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:50.849 rtt min/avg/max/mdev = 0.483/0.483/0.483/0.000 ms 00:26:50.849 21:27:44 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:50.849 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:50.849 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.503 ms 00:26:50.849 00:26:50.849 --- 10.0.0.1 ping statistics --- 00:26:50.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:50.849 rtt min/avg/max/mdev = 0.503/0.503/0.503/0.000 ms 00:26:50.849 21:27:44 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:50.849 21:27:44 -- nvmf/common.sh@411 -- # return 0 00:26:50.849 21:27:44 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:26:50.849 21:27:44 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:50.849 21:27:44 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:26:50.849 21:27:44 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:26:50.849 21:27:44 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:50.849 21:27:44 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:26:50.849 21:27:44 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:26:50.849 21:27:44 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:26:50.849 21:27:44 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:26:50.849 21:27:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:50.849 21:27:44 -- common/autotest_common.sh@10 -- # set +x 00:26:50.849 21:27:44 -- nvmf/common.sh@470 -- # nvmfpid=1566850 00:26:50.849 21:27:44 -- nvmf/common.sh@471 -- # waitforlisten 1566850 00:26:50.849 21:27:44 -- common/autotest_common.sh@817 -- # '[' -z 1566850 ']' 00:26:50.849 21:27:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:50.849 21:27:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:50.849 21:27:44 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:50.849 21:27:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:50.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:50.849 21:27:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:50.849 21:27:44 -- common/autotest_common.sh@10 -- # set +x 00:26:50.849 [2024-04-23 21:27:45.060551] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 00:26:50.849 [2024-04-23 21:27:45.060661] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:51.108 EAL: No free 2048 kB hugepages reported on node 1 00:26:51.108 [2024-04-23 21:27:45.181621] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:51.108 [2024-04-23 21:27:45.274296] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:51.108 [2024-04-23 21:27:45.274331] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
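Every target-side command in this log runs inside the namespace because NVMF_TARGET_NS_CMD prefixes it with 'ip netns exec'. A sketch of the target launch plus one way to approximate the waitforlisten poll, assuming the same paths:

NS=cvl_0_0_ns_spdk
SPDK=/var/jenkins/workspace/dsa-phy-autotest/spdk
ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# poll the RPC socket until the target answers (waitforlisten does roughly this):
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock -t 1 rpc_get_methods \
        >/dev/null 2>&1; do
    sleep 0.2
done
echo "nvmf_tgt up as pid $nvmfpid"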
00:26:51.108 [2024-04-23 21:27:45.274342] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:51.108 [2024-04-23 21:27:45.274351] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:51.108 [2024-04-23 21:27:45.274358] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:51.108 [2024-04-23 21:27:45.274411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:51.108 [2024-04-23 21:27:45.274520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:51.108 [2024-04-23 21:27:45.274664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:51.108 [2024-04-23 21:27:45.274673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:51.676 21:27:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:51.676 21:27:45 -- common/autotest_common.sh@850 -- # return 0 00:26:51.676 21:27:45 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:26:51.676 21:27:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:51.676 21:27:45 -- common/autotest_common.sh@10 -- # set +x 00:26:51.676 21:27:45 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:51.676 21:27:45 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:51.676 21:27:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:51.676 21:27:45 -- common/autotest_common.sh@10 -- # set +x 00:26:51.676 [2024-04-23 21:27:45.796583] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:51.676 21:27:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:51.676 21:27:45 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:26:51.676 21:27:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:51.676 21:27:45 -- common/autotest_common.sh@10 -- # set +x 00:26:51.676 Malloc0 00:26:51.676 21:27:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:51.676 21:27:45 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:26:51.676 21:27:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:51.676 21:27:45 -- common/autotest_common.sh@10 -- # set +x 00:26:51.676 21:27:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:51.676 21:27:45 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:51.676 21:27:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:51.676 21:27:45 -- common/autotest_common.sh@10 -- # set +x 00:26:51.676 21:27:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:51.676 21:27:45 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:51.676 21:27:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:51.676 21:27:45 -- common/autotest_common.sh@10 -- # set +x 00:26:51.676 [2024-04-23 21:27:45.861906] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:51.676 21:27:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:51.676 21:27:45 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:26:51.676 21:27:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:51.676 21:27:45 -- common/autotest_common.sh@10 -- # set +x 00:26:51.676 [2024-04-23 21:27:45.869644] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated 
feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:26:51.676 [ 00:26:51.676 { 00:26:51.676 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:51.676 "subtype": "Discovery", 00:26:51.676 "listen_addresses": [], 00:26:51.676 "allow_any_host": true, 00:26:51.676 "hosts": [] 00:26:51.676 }, 00:26:51.676 { 00:26:51.676 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:51.676 "subtype": "NVMe", 00:26:51.676 "listen_addresses": [ 00:26:51.676 { 00:26:51.676 "transport": "TCP", 00:26:51.676 "trtype": "TCP", 00:26:51.676 "adrfam": "IPv4", 00:26:51.676 "traddr": "10.0.0.2", 00:26:51.676 "trsvcid": "4420" 00:26:51.676 } 00:26:51.676 ], 00:26:51.676 "allow_any_host": true, 00:26:51.676 "hosts": [], 00:26:51.676 "serial_number": "SPDK00000000000001", 00:26:51.676 "model_number": "SPDK bdev Controller", 00:26:51.676 "max_namespaces": 2, 00:26:51.676 "min_cntlid": 1, 00:26:51.676 "max_cntlid": 65519, 00:26:51.676 "namespaces": [ 00:26:51.676 { 00:26:51.676 "nsid": 1, 00:26:51.676 "bdev_name": "Malloc0", 00:26:51.676 "name": "Malloc0", 00:26:51.676 "nguid": "EB4489C6C88C48E6AF0B2A16B0FBFF41", 00:26:51.676 "uuid": "eb4489c6-c88c-48e6-af0b-2a16b0fbff41" 00:26:51.676 } 00:26:51.676 ] 00:26:51.676 } 00:26:51.676 ] 00:26:51.676 21:27:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:51.676 21:27:45 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:26:51.676 21:27:45 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:26:51.676 21:27:45 -- host/aer.sh@33 -- # aerpid=1567085 00:26:51.676 21:27:45 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:26:51.676 21:27:45 -- common/autotest_common.sh@1251 -- # local i=0 00:26:51.676 21:27:45 -- host/aer.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:26:51.676 21:27:45 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:51.676 21:27:45 -- common/autotest_common.sh@1253 -- # '[' 0 -lt 200 ']' 00:26:51.676 21:27:45 -- common/autotest_common.sh@1254 -- # i=1 00:26:51.676 21:27:45 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:26:51.937 21:27:45 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:51.937 21:27:45 -- common/autotest_common.sh@1253 -- # '[' 1 -lt 200 ']' 00:26:51.937 21:27:45 -- common/autotest_common.sh@1254 -- # i=2 00:26:51.937 21:27:45 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:26:51.937 EAL: No free 2048 kB hugepages reported on node 1 00:26:51.937 21:27:46 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:51.937 21:27:46 -- common/autotest_common.sh@1253 -- # '[' 2 -lt 200 ']' 00:26:51.937 21:27:46 -- common/autotest_common.sh@1254 -- # i=3 00:26:51.937 21:27:46 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:26:51.937 21:27:46 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:51.937 21:27:46 -- common/autotest_common.sh@1258 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:26:51.937 21:27:46 -- common/autotest_common.sh@1262 -- # return 0 00:26:51.937 21:27:46 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:26:51.937 21:27:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:51.937 21:27:46 -- common/autotest_common.sh@10 -- # set +x 00:26:52.267 Malloc1 00:26:52.267 21:27:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:52.267 21:27:46 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:26:52.267 21:27:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:52.267 21:27:46 -- common/autotest_common.sh@10 -- # set +x 00:26:52.267 21:27:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:52.267 21:27:46 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:26:52.267 21:27:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:52.267 21:27:46 -- common/autotest_common.sh@10 -- # set +x 00:26:52.267 [ 00:26:52.267 { 00:26:52.267 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:52.267 "subtype": "Discovery", 00:26:52.267 "listen_addresses": [], 00:26:52.267 "allow_any_host": true, 00:26:52.267 "hosts": [] 00:26:52.267 }, 00:26:52.267 { 00:26:52.267 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:52.267 "subtype": "NVMe", 00:26:52.267 "listen_addresses": [ 00:26:52.267 { 00:26:52.267 "transport": "TCP", 00:26:52.267 "trtype": "TCP", 00:26:52.267 "adrfam": "IPv4", 00:26:52.267 "traddr": "10.0.0.2", 00:26:52.267 "trsvcid": "4420" 00:26:52.267 } 00:26:52.267 ], 00:26:52.267 "allow_any_host": true, 00:26:52.267 "hosts": [], 00:26:52.267 "serial_number": "SPDK00000000000001", 00:26:52.267 "model_number": "SPDK bdev Controller", 00:26:52.267 "max_namespaces": 2, 00:26:52.267 "min_cntlid": 1, 00:26:52.267 "max_cntlid": 65519, 00:26:52.267 "namespaces": [ 00:26:52.267 { 00:26:52.267 "nsid": 1, 00:26:52.267 "bdev_name": "Malloc0", 00:26:52.267 "name": "Malloc0", 00:26:52.267 "nguid": "EB4489C6C88C48E6AF0B2A16B0FBFF41", 00:26:52.267 "uuid": "eb4489c6-c88c-48e6-af0b-2a16b0fbff41" 00:26:52.267 }, 00:26:52.267 { 00:26:52.267 "nsid": 2, 00:26:52.267 "bdev_name": "Malloc1", 00:26:52.267 "name": "Malloc1", 00:26:52.267 "nguid": "198C1F780BAB4E969C0D14F712B0249F", 00:26:52.267 "uuid": "198c1f78-0bab-4e96-9c0d-14f712b0249f" 00:26:52.267 } 00:26:52.267 ] 00:26:52.267 } 00:26:52.267 ] 00:26:52.267 21:27:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:52.267 21:27:46 -- host/aer.sh@43 -- # wait 1567085 00:26:52.267 Asynchronous Event Request test 00:26:52.267 Attaching to 10.0.0.2 00:26:52.267 Attached to 10.0.0.2 00:26:52.267 Registering asynchronous event callbacks... 00:26:52.267 Starting namespace attribute notice tests for all controllers... 00:26:52.267 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:26:52.267 aer_cb - Changed Namespace 00:26:52.267 Cleaning up... 
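The AER flow above is a handshake: the aer binary registers its callbacks and touches $AER_TOUCH_FILE, the script polls for that file, and only then hot-adds the second namespace so the namespace-attribute-changed AEN has a listener. A sketch mirroring the waitforfile loop seen in the log, with the same paths and arguments:

SPDK=/var/jenkins/workspace/dsa-phy-autotest/spdk
AER_TOUCH_FILE=/tmp/aer_touch_file
rm -f "$AER_TOUCH_FILE"
"$SPDK/test/nvme/aer/aer" \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
    -n 2 -t "$AER_TOUCH_FILE" &
i=0
while [ ! -e "$AER_TOUCH_FILE" ] && [ "$i" -lt 200 ]; do   # waitforfile: ~20 s cap
    i=$((i + 1)); sleep 0.1
done
# trigger the namespace-change AEN the test is waiting for:
"$SPDK/scripts/rpc.py" bdev_malloc_create 64 4096 --name Malloc1
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2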
00:26:52.267 21:27:46 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:26:52.267 21:27:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:52.267 21:27:46 -- common/autotest_common.sh@10 -- # set +x 00:26:52.267 21:27:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:52.267 21:27:46 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:26:52.267 21:27:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:52.267 21:27:46 -- common/autotest_common.sh@10 -- # set +x 00:26:52.267 21:27:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:52.267 21:27:46 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:52.267 21:27:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:52.267 21:27:46 -- common/autotest_common.sh@10 -- # set +x 00:26:52.267 21:27:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:52.267 21:27:46 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:26:52.267 21:27:46 -- host/aer.sh@51 -- # nvmftestfini 00:26:52.267 21:27:46 -- nvmf/common.sh@477 -- # nvmfcleanup 00:26:52.267 21:27:46 -- nvmf/common.sh@117 -- # sync 00:26:52.267 21:27:46 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:52.267 21:27:46 -- nvmf/common.sh@120 -- # set +e 00:26:52.267 21:27:46 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:52.267 21:27:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:52.268 rmmod nvme_tcp 00:26:52.268 rmmod nvme_fabrics 00:26:52.268 rmmod nvme_keyring 00:26:52.587 21:27:46 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:52.587 21:27:46 -- nvmf/common.sh@124 -- # set -e 00:26:52.587 21:27:46 -- nvmf/common.sh@125 -- # return 0 00:26:52.587 21:27:46 -- nvmf/common.sh@478 -- # '[' -n 1566850 ']' 00:26:52.587 21:27:46 -- nvmf/common.sh@479 -- # killprocess 1566850 00:26:52.587 21:27:46 -- common/autotest_common.sh@936 -- # '[' -z 1566850 ']' 00:26:52.587 21:27:46 -- common/autotest_common.sh@940 -- # kill -0 1566850 00:26:52.587 21:27:46 -- common/autotest_common.sh@941 -- # uname 00:26:52.587 21:27:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:52.587 21:27:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1566850 00:26:52.587 21:27:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:52.587 21:27:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:52.587 21:27:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1566850' 00:26:52.587 killing process with pid 1566850 00:26:52.587 21:27:46 -- common/autotest_common.sh@955 -- # kill 1566850 00:26:52.587 [2024-04-23 21:27:46.594963] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:26:52.587 21:27:46 -- common/autotest_common.sh@960 -- # wait 1566850 00:26:52.879 21:27:47 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:26:52.879 21:27:47 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:26:52.879 21:27:47 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:26:52.879 21:27:47 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:52.879 21:27:47 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:52.879 21:27:47 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:52.879 21:27:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:52.879 21:27:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:55.413 21:27:49 -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:55.413 00:26:55.413 real 0m9.724s 00:26:55.413 user 0m8.140s 00:26:55.413 sys 0m4.618s 00:26:55.413 21:27:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:55.413 21:27:49 -- common/autotest_common.sh@10 -- # set +x 00:26:55.413 ************************************ 00:26:55.413 END TEST nvmf_aer 00:26:55.413 ************************************ 00:26:55.413 21:27:49 -- nvmf/nvmf.sh@91 -- # run_test nvmf_async_init /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:26:55.413 21:27:49 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:55.413 21:27:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:55.413 21:27:49 -- common/autotest_common.sh@10 -- # set +x 00:26:55.413 ************************************ 00:26:55.413 START TEST nvmf_async_init 00:26:55.413 ************************************ 00:26:55.413 21:27:49 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:26:55.413 * Looking for test storage... 00:26:55.413 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:26:55.413 21:27:49 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:26:55.413 21:27:49 -- nvmf/common.sh@7 -- # uname -s 00:26:55.413 21:27:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:55.413 21:27:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:55.413 21:27:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:55.413 21:27:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:55.413 21:27:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:55.413 21:27:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:55.413 21:27:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:55.413 21:27:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:55.413 21:27:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:55.413 21:27:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:55.413 21:27:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:26:55.413 21:27:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:26:55.413 21:27:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:55.413 21:27:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:55.413 21:27:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:26:55.413 21:27:49 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:55.413 21:27:49 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:26:55.413 21:27:49 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:55.413 21:27:49 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:55.413 21:27:49 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:55.413 21:27:49 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.413 21:27:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.413 21:27:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.413 21:27:49 -- paths/export.sh@5 -- # export PATH 00:26:55.413 21:27:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.413 21:27:49 -- nvmf/common.sh@47 -- # : 0 00:26:55.413 21:27:49 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:55.413 21:27:49 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:55.413 21:27:49 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:55.413 21:27:49 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:55.414 21:27:49 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:55.414 21:27:49 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:55.414 21:27:49 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:55.414 21:27:49 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:55.414 21:27:49 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:26:55.414 21:27:49 -- host/async_init.sh@14 -- # null_block_size=512 00:26:55.414 21:27:49 -- host/async_init.sh@15 -- # null_bdev=null0 00:26:55.414 21:27:49 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:26:55.414 21:27:49 -- host/async_init.sh@20 -- # tr -d - 00:26:55.414 21:27:49 -- host/async_init.sh@20 -- # uuidgen 00:26:55.414 21:27:49 -- host/async_init.sh@20 -- # nguid=d748730592464cffa61bfb064aa8eb03 00:26:55.414 21:27:49 -- host/async_init.sh@22 -- # nvmftestinit 00:26:55.414 21:27:49 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 
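Before any target comes up, async_init.sh fixes its fixtures: null0 is a 1024 MB null bdev with 512-byte blocks (hence the 2097152-block namespace dumped later), and the namespace GUID is nothing more than a fresh UUID with its dashes stripped, i.e. the 32-hex-digit form that nvmf_subsystem_add_ns takes via -g. The two xtraced commands behind the nguid, condensed into a sketch:

    nguid=$(uuidgen | tr -d -)    # d748730592464cffa61bfb064aa8eb03 in this run
    echo "$nguid"

The dashed form of the same UUID resurfaces untouched later as the "uuid" field in bdev_get_bdevs.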
00:26:55.414 21:27:49 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:55.414 21:27:49 -- nvmf/common.sh@437 -- # prepare_net_devs 00:26:55.414 21:27:49 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:26:55.414 21:27:49 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:26:55.414 21:27:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:55.414 21:27:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:55.414 21:27:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:55.414 21:27:49 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:26:55.414 21:27:49 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:26:55.414 21:27:49 -- nvmf/common.sh@285 -- # xtrace_disable 00:26:55.414 21:27:49 -- common/autotest_common.sh@10 -- # set +x 00:27:00.690 21:27:54 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:00.690 21:27:54 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:00.690 21:27:54 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:00.690 21:27:54 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:00.690 21:27:54 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:00.690 21:27:54 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:00.690 21:27:54 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:00.691 21:27:54 -- nvmf/common.sh@295 -- # net_devs=() 00:27:00.691 21:27:54 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:00.691 21:27:54 -- nvmf/common.sh@296 -- # e810=() 00:27:00.691 21:27:54 -- nvmf/common.sh@296 -- # local -ga e810 00:27:00.691 21:27:54 -- nvmf/common.sh@297 -- # x722=() 00:27:00.691 21:27:54 -- nvmf/common.sh@297 -- # local -ga x722 00:27:00.691 21:27:54 -- nvmf/common.sh@298 -- # mlx=() 00:27:00.691 21:27:54 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:00.691 21:27:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:00.691 21:27:54 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:00.691 21:27:54 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:00.691 21:27:54 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:00.691 21:27:54 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:00.691 21:27:54 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:00.691 21:27:54 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:00.691 21:27:54 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:00.691 21:27:54 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:00.691 21:27:54 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:00.691 21:27:54 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:00.691 21:27:54 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:00.691 21:27:54 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:00.691 21:27:54 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:27:00.691 21:27:54 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:27:00.691 21:27:54 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:27:00.691 21:27:54 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:00.691 21:27:54 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:00.691 21:27:54 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:27:00.691 Found 0000:27:00.0 (0x8086 - 0x159b) 00:27:00.691 21:27:54 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:00.691 21:27:54 -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:00.691 21:27:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:00.691 21:27:54 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:00.691 21:27:54 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:00.691 21:27:54 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:00.691 21:27:54 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:27:00.691 Found 0000:27:00.1 (0x8086 - 0x159b) 00:27:00.691 21:27:54 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:00.691 21:27:54 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:00.691 21:27:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:00.691 21:27:54 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:00.691 21:27:54 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:00.691 21:27:54 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:00.691 21:27:54 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:27:00.691 21:27:54 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:00.691 21:27:54 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:00.691 21:27:54 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:27:00.691 21:27:54 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:00.691 21:27:54 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:27:00.691 Found net devices under 0000:27:00.0: cvl_0_0 00:27:00.691 21:27:54 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:27:00.691 21:27:54 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:00.691 21:27:54 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:00.691 21:27:54 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:27:00.691 21:27:54 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:00.691 21:27:54 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:27:00.691 Found net devices under 0000:27:00.1: cvl_0_1 00:27:00.691 21:27:54 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:27:00.691 21:27:54 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:27:00.691 21:27:54 -- nvmf/common.sh@403 -- # is_hw=yes 00:27:00.691 21:27:54 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:27:00.691 21:27:54 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:27:00.691 21:27:54 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:27:00.691 21:27:54 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:00.691 21:27:54 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:00.691 21:27:54 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:00.691 21:27:54 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:00.691 21:27:54 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:00.691 21:27:54 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:00.691 21:27:54 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:00.691 21:27:54 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:00.691 21:27:54 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:00.691 21:27:54 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:00.691 21:27:54 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:00.691 21:27:54 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:00.691 21:27:54 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:00.691 21:27:54 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 
dev cvl_0_1 00:27:00.691 21:27:54 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:00.691 21:27:54 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:00.691 21:27:54 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:00.691 21:27:54 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:00.691 21:27:54 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:00.691 21:27:54 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:00.691 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:00.691 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.553 ms 00:27:00.691 00:27:00.691 --- 10.0.0.2 ping statistics --- 00:27:00.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:00.691 rtt min/avg/max/mdev = 0.553/0.553/0.553/0.000 ms 00:27:00.691 21:27:54 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:00.691 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:00.691 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.487 ms 00:27:00.691 00:27:00.691 --- 10.0.0.1 ping statistics --- 00:27:00.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:00.691 rtt min/avg/max/mdev = 0.487/0.487/0.487/0.000 ms 00:27:00.691 21:27:54 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:00.691 21:27:54 -- nvmf/common.sh@411 -- # return 0 00:27:00.691 21:27:54 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:27:00.691 21:27:54 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:00.691 21:27:54 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:27:00.691 21:27:54 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:27:00.691 21:27:54 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:00.691 21:27:54 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:27:00.691 21:27:54 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:27:00.691 21:27:54 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:27:00.691 21:27:54 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:27:00.691 21:27:54 -- common/autotest_common.sh@710 -- # xtrace_disable 00:27:00.691 21:27:54 -- common/autotest_common.sh@10 -- # set +x 00:27:00.691 21:27:54 -- nvmf/common.sh@470 -- # nvmfpid=1571262 00:27:00.691 21:27:54 -- nvmf/common.sh@471 -- # waitforlisten 1571262 00:27:00.691 21:27:54 -- common/autotest_common.sh@817 -- # '[' -z 1571262 ']' 00:27:00.691 21:27:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:00.691 21:27:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:00.691 21:27:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:00.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:00.691 21:27:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:00.691 21:27:54 -- common/autotest_common.sh@10 -- # set +x 00:27:00.691 21:27:54 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:27:00.951 [2024-04-23 21:27:54.972528] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 
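Everything from here on runs against that split stack: one port of the two-port NIC (cvl_0_0) is pushed into the cvl_0_0_ns_spdk namespace as the target at 10.0.0.2, its sibling (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and the two pings confirm the path in both directions before nvmf_tgt is launched under ip netns exec. Condensed from the xtraced setup, with the same interface names and addresses as this run:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # let NVMe/TCP port 4420 through
    ping -c 1 10.0.0.2                                              # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                # target -> initiator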
00:27:00.951 [2024-04-23 21:27:54.972658] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:00.951 EAL: No free 2048 kB hugepages reported on node 1 00:27:00.951 [2024-04-23 21:27:55.101762] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:00.951 [2024-04-23 21:27:55.200240] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:00.951 [2024-04-23 21:27:55.200275] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:00.951 [2024-04-23 21:27:55.200285] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:00.951 [2024-04-23 21:27:55.200298] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:00.951 [2024-04-23 21:27:55.200306] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:00.951 [2024-04-23 21:27:55.200336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:01.520 21:27:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:01.520 21:27:55 -- common/autotest_common.sh@850 -- # return 0 00:27:01.520 21:27:55 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:27:01.520 21:27:55 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:01.520 21:27:55 -- common/autotest_common.sh@10 -- # set +x 00:27:01.520 21:27:55 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:01.520 21:27:55 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:01.520 21:27:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:01.520 21:27:55 -- common/autotest_common.sh@10 -- # set +x 00:27:01.520 [2024-04-23 21:27:55.702682] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:01.520 21:27:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:01.520 21:27:55 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:27:01.520 21:27:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:01.520 21:27:55 -- common/autotest_common.sh@10 -- # set +x 00:27:01.520 null0 00:27:01.520 21:27:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:01.520 21:27:55 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:27:01.520 21:27:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:01.520 21:27:55 -- common/autotest_common.sh@10 -- # set +x 00:27:01.520 21:27:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:01.520 21:27:55 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:27:01.520 21:27:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:01.520 21:27:55 -- common/autotest_common.sh@10 -- # set +x 00:27:01.520 21:27:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:01.520 21:27:55 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g d748730592464cffa61bfb064aa8eb03 00:27:01.520 21:27:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:01.520 21:27:55 -- common/autotest_common.sh@10 -- # set +x 00:27:01.520 21:27:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:01.520 21:27:55 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:27:01.520 21:27:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:01.520 21:27:55 -- common/autotest_common.sh@10 -- # set +x 00:27:01.520 [2024-04-23 21:27:55.742815] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:01.520 21:27:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:01.520 21:27:55 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:27:01.520 21:27:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:01.520 21:27:55 -- common/autotest_common.sh@10 -- # set +x 00:27:01.779 nvme0n1 00:27:01.779 21:27:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:01.779 21:27:55 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:01.779 21:27:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:01.779 21:27:55 -- common/autotest_common.sh@10 -- # set +x 00:27:01.779 [ 00:27:01.779 { 00:27:01.779 "name": "nvme0n1", 00:27:01.779 "aliases": [ 00:27:01.779 "d7487305-9246-4cff-a61b-fb064aa8eb03" 00:27:01.779 ], 00:27:01.779 "product_name": "NVMe disk", 00:27:01.779 "block_size": 512, 00:27:01.779 "num_blocks": 2097152, 00:27:01.779 "uuid": "d7487305-9246-4cff-a61b-fb064aa8eb03", 00:27:01.779 "assigned_rate_limits": { 00:27:01.779 "rw_ios_per_sec": 0, 00:27:01.779 "rw_mbytes_per_sec": 0, 00:27:01.779 "r_mbytes_per_sec": 0, 00:27:01.779 "w_mbytes_per_sec": 0 00:27:01.779 }, 00:27:01.779 "claimed": false, 00:27:01.779 "zoned": false, 00:27:01.779 "supported_io_types": { 00:27:01.779 "read": true, 00:27:01.779 "write": true, 00:27:01.779 "unmap": false, 00:27:01.779 "write_zeroes": true, 00:27:01.779 "flush": true, 00:27:01.779 "reset": true, 00:27:01.779 "compare": true, 00:27:01.779 "compare_and_write": true, 00:27:01.779 "abort": true, 00:27:01.779 "nvme_admin": true, 00:27:01.779 "nvme_io": true 00:27:01.779 }, 00:27:01.779 "memory_domains": [ 00:27:01.779 { 00:27:01.779 "dma_device_id": "system", 00:27:01.779 "dma_device_type": 1 00:27:01.779 } 00:27:01.779 ], 00:27:01.779 "driver_specific": { 00:27:01.779 "nvme": [ 00:27:01.779 { 00:27:01.779 "trid": { 00:27:01.779 "trtype": "TCP", 00:27:01.779 "adrfam": "IPv4", 00:27:01.779 "traddr": "10.0.0.2", 00:27:01.779 "trsvcid": "4420", 00:27:01.779 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:01.779 }, 00:27:01.779 "ctrlr_data": { 00:27:01.779 "cntlid": 1, 00:27:01.779 "vendor_id": "0x8086", 00:27:01.779 "model_number": "SPDK bdev Controller", 00:27:01.779 "serial_number": "00000000000000000000", 00:27:01.779 "firmware_revision": "24.05", 00:27:01.779 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:01.779 "oacs": { 00:27:01.779 "security": 0, 00:27:01.779 "format": 0, 00:27:01.779 "firmware": 0, 00:27:01.779 "ns_manage": 0 00:27:01.779 }, 00:27:01.779 "multi_ctrlr": true, 00:27:01.779 "ana_reporting": false 00:27:01.779 }, 00:27:01.779 "vs": { 00:27:01.779 "nvme_version": "1.3" 00:27:01.779 }, 00:27:01.779 "ns_data": { 00:27:01.779 "id": 1, 00:27:01.779 "can_share": true 00:27:01.779 } 00:27:01.779 } 00:27:01.779 ], 00:27:01.779 "mp_policy": "active_passive" 00:27:01.779 } 00:27:01.779 } 00:27:01.779 ] 00:27:01.779 21:27:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:01.779 21:27:55 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:27:01.779 21:27:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:01.779 21:27:55 -- common/autotest_common.sh@10 -- # set +x 00:27:01.779 [2024-04-23 21:27:55.990932] 
nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:01.779 [2024-04-23 21:27:55.991015] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000006840 (9): Bad file descriptor 00:27:02.037 [2024-04-23 21:27:56.122737] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:27:02.037 21:27:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:02.037 21:27:56 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:02.037 21:27:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:02.037 21:27:56 -- common/autotest_common.sh@10 -- # set +x 00:27:02.037 [ 00:27:02.037 { 00:27:02.037 "name": "nvme0n1", 00:27:02.037 "aliases": [ 00:27:02.037 "d7487305-9246-4cff-a61b-fb064aa8eb03" 00:27:02.037 ], 00:27:02.037 "product_name": "NVMe disk", 00:27:02.037 "block_size": 512, 00:27:02.038 "num_blocks": 2097152, 00:27:02.038 "uuid": "d7487305-9246-4cff-a61b-fb064aa8eb03", 00:27:02.038 "assigned_rate_limits": { 00:27:02.038 "rw_ios_per_sec": 0, 00:27:02.038 "rw_mbytes_per_sec": 0, 00:27:02.038 "r_mbytes_per_sec": 0, 00:27:02.038 "w_mbytes_per_sec": 0 00:27:02.038 }, 00:27:02.038 "claimed": false, 00:27:02.038 "zoned": false, 00:27:02.038 "supported_io_types": { 00:27:02.038 "read": true, 00:27:02.038 "write": true, 00:27:02.038 "unmap": false, 00:27:02.038 "write_zeroes": true, 00:27:02.038 "flush": true, 00:27:02.038 "reset": true, 00:27:02.038 "compare": true, 00:27:02.038 "compare_and_write": true, 00:27:02.038 "abort": true, 00:27:02.038 "nvme_admin": true, 00:27:02.038 "nvme_io": true 00:27:02.038 }, 00:27:02.038 "memory_domains": [ 00:27:02.038 { 00:27:02.038 "dma_device_id": "system", 00:27:02.038 "dma_device_type": 1 00:27:02.038 } 00:27:02.038 ], 00:27:02.038 "driver_specific": { 00:27:02.038 "nvme": [ 00:27:02.038 { 00:27:02.038 "trid": { 00:27:02.038 "trtype": "TCP", 00:27:02.038 "adrfam": "IPv4", 00:27:02.038 "traddr": "10.0.0.2", 00:27:02.038 "trsvcid": "4420", 00:27:02.038 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:02.038 }, 00:27:02.038 "ctrlr_data": { 00:27:02.038 "cntlid": 2, 00:27:02.038 "vendor_id": "0x8086", 00:27:02.038 "model_number": "SPDK bdev Controller", 00:27:02.038 "serial_number": "00000000000000000000", 00:27:02.038 "firmware_revision": "24.05", 00:27:02.038 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:02.038 "oacs": { 00:27:02.038 "security": 0, 00:27:02.038 "format": 0, 00:27:02.038 "firmware": 0, 00:27:02.038 "ns_manage": 0 00:27:02.038 }, 00:27:02.038 "multi_ctrlr": true, 00:27:02.038 "ana_reporting": false 00:27:02.038 }, 00:27:02.038 "vs": { 00:27:02.038 "nvme_version": "1.3" 00:27:02.038 }, 00:27:02.038 "ns_data": { 00:27:02.038 "id": 1, 00:27:02.038 "can_share": true 00:27:02.038 } 00:27:02.038 } 00:27:02.038 ], 00:27:02.038 "mp_policy": "active_passive" 00:27:02.038 } 00:27:02.038 } 00:27:02.038 ] 00:27:02.038 21:27:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:02.038 21:27:56 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:02.038 21:27:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:02.038 21:27:56 -- common/autotest_common.sh@10 -- # set +x 00:27:02.038 21:27:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:02.038 21:27:56 -- host/async_init.sh@53 -- # mktemp 00:27:02.038 21:27:56 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.71S1xM1UaI 00:27:02.038 21:27:56 -- host/async_init.sh@54 -- # echo -n 
NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:27:02.038 21:27:56 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.71S1xM1UaI 00:27:02.038 21:27:56 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:27:02.038 21:27:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:02.038 21:27:56 -- common/autotest_common.sh@10 -- # set +x 00:27:02.038 21:27:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:02.038 21:27:56 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:27:02.038 21:27:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:02.038 21:27:56 -- common/autotest_common.sh@10 -- # set +x 00:27:02.038 [2024-04-23 21:27:56.167098] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:02.038 [2024-04-23 21:27:56.167239] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:02.038 21:27:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:02.038 21:27:56 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.71S1xM1UaI 00:27:02.038 21:27:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:02.038 21:27:56 -- common/autotest_common.sh@10 -- # set +x 00:27:02.038 [2024-04-23 21:27:56.175108] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:27:02.038 21:27:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:02.038 21:27:56 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.71S1xM1UaI 00:27:02.038 21:27:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:02.038 21:27:56 -- common/autotest_common.sh@10 -- # set +x 00:27:02.038 [2024-04-23 21:27:56.183093] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:02.038 [2024-04-23 21:27:56.183157] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:27:02.038 nvme0n1 00:27:02.038 21:27:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:02.038 21:27:56 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:02.038 21:27:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:02.038 21:27:56 -- common/autotest_common.sh@10 -- # set +x 00:27:02.038 [ 00:27:02.038 { 00:27:02.038 "name": "nvme0n1", 00:27:02.038 "aliases": [ 00:27:02.038 "d7487305-9246-4cff-a61b-fb064aa8eb03" 00:27:02.038 ], 00:27:02.038 "product_name": "NVMe disk", 00:27:02.038 "block_size": 512, 00:27:02.038 "num_blocks": 2097152, 00:27:02.038 "uuid": "d7487305-9246-4cff-a61b-fb064aa8eb03", 00:27:02.038 "assigned_rate_limits": { 00:27:02.038 "rw_ios_per_sec": 0, 00:27:02.038 "rw_mbytes_per_sec": 0, 00:27:02.038 "r_mbytes_per_sec": 0, 00:27:02.038 "w_mbytes_per_sec": 0 00:27:02.038 }, 00:27:02.038 "claimed": false, 00:27:02.038 "zoned": false, 00:27:02.038 "supported_io_types": { 00:27:02.038 "read": true, 00:27:02.038 "write": true, 00:27:02.038 "unmap": false, 00:27:02.038 "write_zeroes": true, 00:27:02.038 "flush": true, 00:27:02.038 "reset": true, 00:27:02.038 "compare": true, 00:27:02.038 "compare_and_write": true, 00:27:02.038 
"abort": true, 00:27:02.038 "nvme_admin": true, 00:27:02.038 "nvme_io": true 00:27:02.038 }, 00:27:02.038 "memory_domains": [ 00:27:02.038 { 00:27:02.038 "dma_device_id": "system", 00:27:02.038 "dma_device_type": 1 00:27:02.038 } 00:27:02.038 ], 00:27:02.038 "driver_specific": { 00:27:02.038 "nvme": [ 00:27:02.038 { 00:27:02.038 "trid": { 00:27:02.038 "trtype": "TCP", 00:27:02.038 "adrfam": "IPv4", 00:27:02.038 "traddr": "10.0.0.2", 00:27:02.038 "trsvcid": "4421", 00:27:02.038 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:02.038 }, 00:27:02.038 "ctrlr_data": { 00:27:02.038 "cntlid": 3, 00:27:02.038 "vendor_id": "0x8086", 00:27:02.038 "model_number": "SPDK bdev Controller", 00:27:02.038 "serial_number": "00000000000000000000", 00:27:02.038 "firmware_revision": "24.05", 00:27:02.038 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:02.038 "oacs": { 00:27:02.038 "security": 0, 00:27:02.038 "format": 0, 00:27:02.038 "firmware": 0, 00:27:02.038 "ns_manage": 0 00:27:02.038 }, 00:27:02.038 "multi_ctrlr": true, 00:27:02.038 "ana_reporting": false 00:27:02.038 }, 00:27:02.038 "vs": { 00:27:02.038 "nvme_version": "1.3" 00:27:02.038 }, 00:27:02.038 "ns_data": { 00:27:02.038 "id": 1, 00:27:02.038 "can_share": true 00:27:02.038 } 00:27:02.038 } 00:27:02.038 ], 00:27:02.038 "mp_policy": "active_passive" 00:27:02.038 } 00:27:02.038 } 00:27:02.038 ] 00:27:02.038 21:27:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:02.038 21:27:56 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:02.038 21:27:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:02.038 21:27:56 -- common/autotest_common.sh@10 -- # set +x 00:27:02.038 21:27:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:02.038 21:27:56 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.71S1xM1UaI 00:27:02.038 21:27:56 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:27:02.038 21:27:56 -- host/async_init.sh@78 -- # nvmftestfini 00:27:02.038 21:27:56 -- nvmf/common.sh@477 -- # nvmfcleanup 00:27:02.038 21:27:56 -- nvmf/common.sh@117 -- # sync 00:27:02.038 21:27:56 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:02.038 21:27:56 -- nvmf/common.sh@120 -- # set +e 00:27:02.038 21:27:56 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:02.038 21:27:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:02.038 rmmod nvme_tcp 00:27:02.038 rmmod nvme_fabrics 00:27:02.038 rmmod nvme_keyring 00:27:02.298 21:27:56 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:02.298 21:27:56 -- nvmf/common.sh@124 -- # set -e 00:27:02.298 21:27:56 -- nvmf/common.sh@125 -- # return 0 00:27:02.298 21:27:56 -- nvmf/common.sh@478 -- # '[' -n 1571262 ']' 00:27:02.298 21:27:56 -- nvmf/common.sh@479 -- # killprocess 1571262 00:27:02.298 21:27:56 -- common/autotest_common.sh@936 -- # '[' -z 1571262 ']' 00:27:02.298 21:27:56 -- common/autotest_common.sh@940 -- # kill -0 1571262 00:27:02.298 21:27:56 -- common/autotest_common.sh@941 -- # uname 00:27:02.298 21:27:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:02.298 21:27:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1571262 00:27:02.298 21:27:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:02.298 21:27:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:02.298 21:27:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1571262' 00:27:02.298 killing process with pid 1571262 00:27:02.298 21:27:56 -- common/autotest_common.sh@955 -- # kill 1571262 00:27:02.298 
[2024-04-23 21:27:56.369109] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:27:02.298 [2024-04-23 21:27:56.369143] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:27:02.298 21:27:56 -- common/autotest_common.sh@960 -- # wait 1571262 00:27:02.557 21:27:56 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:27:02.558 21:27:56 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:27:02.558 21:27:56 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:27:02.558 21:27:56 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:02.558 21:27:56 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:02.558 21:27:56 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:02.558 21:27:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:02.558 21:27:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:05.097 21:27:58 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:05.097 00:27:05.097 real 0m9.614s 00:27:05.097 user 0m3.413s 00:27:05.097 sys 0m4.494s 00:27:05.097 21:27:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:05.097 21:27:58 -- common/autotest_common.sh@10 -- # set +x 00:27:05.097 ************************************ 00:27:05.097 END TEST nvmf_async_init 00:27:05.097 ************************************ 00:27:05.097 21:27:58 -- nvmf/nvmf.sh@92 -- # run_test dma /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:27:05.098 21:27:58 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:27:05.098 21:27:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:05.098 21:27:58 -- common/autotest_common.sh@10 -- # set +x 00:27:05.098 ************************************ 00:27:05.098 START TEST dma 00:27:05.098 ************************************ 00:27:05.098 21:27:58 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:27:05.098 * Looking for test storage... 
00:27:05.098 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:27:05.098 21:27:59 -- host/dma.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:27:05.098 21:27:59 -- nvmf/common.sh@7 -- # uname -s 00:27:05.098 21:27:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:05.098 21:27:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:05.098 21:27:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:05.098 21:27:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:05.098 21:27:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:05.098 21:27:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:05.098 21:27:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:05.098 21:27:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:05.098 21:27:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:05.098 21:27:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:05.098 21:27:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:27:05.098 21:27:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:27:05.098 21:27:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:05.098 21:27:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:05.098 21:27:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:27:05.098 21:27:59 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:05.098 21:27:59 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:27:05.098 21:27:59 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:05.098 21:27:59 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:05.098 21:27:59 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:05.098 21:27:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:05.098 21:27:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:05.098 21:27:59 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:05.098 21:27:59 -- paths/export.sh@5 -- # export PATH 00:27:05.098 21:27:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:05.098 21:27:59 -- nvmf/common.sh@47 -- # : 0 00:27:05.098 21:27:59 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:05.098 21:27:59 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:05.098 21:27:59 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:05.098 21:27:59 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:05.098 21:27:59 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:05.098 21:27:59 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:05.098 21:27:59 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:05.098 21:27:59 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:05.098 21:27:59 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:27:05.098 21:27:59 -- host/dma.sh@13 -- # exit 0 00:27:05.098 00:27:05.098 real 0m0.093s 00:27:05.098 user 0m0.034s 00:27:05.098 sys 0m0.065s 00:27:05.098 21:27:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:05.098 21:27:59 -- common/autotest_common.sh@10 -- # set +x 00:27:05.098 ************************************ 00:27:05.098 END TEST dma 00:27:05.098 ************************************ 00:27:05.098 21:27:59 -- nvmf/nvmf.sh@95 -- # run_test nvmf_identify /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:27:05.098 21:27:59 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:27:05.098 21:27:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:05.098 21:27:59 -- common/autotest_common.sh@10 -- # set +x 00:27:05.098 ************************************ 00:27:05.098 START TEST nvmf_identify 00:27:05.098 ************************************ 00:27:05.098 21:27:59 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:27:05.098 * Looking for test storage... 
00:27:05.098 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:27:05.098 21:27:59 -- host/identify.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:27:05.098 21:27:59 -- nvmf/common.sh@7 -- # uname -s 00:27:05.098 21:27:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:05.098 21:27:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:05.098 21:27:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:05.098 21:27:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:05.098 21:27:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:05.098 21:27:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:05.098 21:27:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:05.098 21:27:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:05.098 21:27:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:05.098 21:27:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:05.098 21:27:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:27:05.098 21:27:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:27:05.098 21:27:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:05.098 21:27:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:05.098 21:27:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:27:05.098 21:27:59 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:05.098 21:27:59 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:27:05.098 21:27:59 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:05.098 21:27:59 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:05.098 21:27:59 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:05.098 21:27:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:05.098 21:27:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:05.098 21:27:59 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:05.098 21:27:59 -- paths/export.sh@5 -- # export PATH 00:27:05.098 21:27:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:05.098 21:27:59 -- nvmf/common.sh@47 -- # : 0 00:27:05.098 21:27:59 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:05.098 21:27:59 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:05.098 21:27:59 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:05.098 21:27:59 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:05.098 21:27:59 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:05.098 21:27:59 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:05.098 21:27:59 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:05.098 21:27:59 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:05.098 21:27:59 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:05.098 21:27:59 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:05.098 21:27:59 -- host/identify.sh@14 -- # nvmftestinit 00:27:05.098 21:27:59 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:27:05.098 21:27:59 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:05.098 21:27:59 -- nvmf/common.sh@437 -- # prepare_net_devs 00:27:05.098 21:27:59 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:27:05.098 21:27:59 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:27:05.098 21:27:59 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:05.098 21:27:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:05.098 21:27:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:05.098 21:27:59 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:27:05.098 21:27:59 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:27:05.098 21:27:59 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:05.098 21:27:59 -- common/autotest_common.sh@10 -- # set +x 00:27:10.375 21:28:04 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:10.375 21:28:04 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:10.375 21:28:04 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:10.375 21:28:04 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:10.375 21:28:04 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:10.375 21:28:04 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:10.375 21:28:04 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:10.375 21:28:04 -- nvmf/common.sh@295 -- # net_devs=() 00:27:10.375 21:28:04 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:10.375 21:28:04 -- 
nvmf/common.sh@296 -- # e810=() 00:27:10.375 21:28:04 -- nvmf/common.sh@296 -- # local -ga e810 00:27:10.375 21:28:04 -- nvmf/common.sh@297 -- # x722=() 00:27:10.375 21:28:04 -- nvmf/common.sh@297 -- # local -ga x722 00:27:10.375 21:28:04 -- nvmf/common.sh@298 -- # mlx=() 00:27:10.375 21:28:04 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:10.375 21:28:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:10.375 21:28:04 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:10.375 21:28:04 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:10.375 21:28:04 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:10.375 21:28:04 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:10.375 21:28:04 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:10.375 21:28:04 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:10.375 21:28:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:10.375 21:28:04 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:10.375 21:28:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:10.375 21:28:04 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:10.375 21:28:04 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:10.375 21:28:04 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:10.375 21:28:04 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:27:10.375 21:28:04 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:27:10.375 21:28:04 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:27:10.375 21:28:04 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:10.375 21:28:04 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:10.375 21:28:04 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:27:10.375 Found 0000:27:00.0 (0x8086 - 0x159b) 00:27:10.375 21:28:04 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:10.375 21:28:04 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:10.375 21:28:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:10.375 21:28:04 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:10.375 21:28:04 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:10.375 21:28:04 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:10.375 21:28:04 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:27:10.375 Found 0000:27:00.1 (0x8086 - 0x159b) 00:27:10.375 21:28:04 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:10.375 21:28:04 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:10.375 21:28:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:10.375 21:28:04 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:10.375 21:28:04 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:10.375 21:28:04 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:10.375 21:28:04 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:27:10.375 21:28:04 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:10.375 21:28:04 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:10.375 21:28:04 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:27:10.375 21:28:04 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:10.375 21:28:04 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:27:10.375 Found net devices under 0000:27:00.0: cvl_0_0 00:27:10.375 
21:28:04 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:27:10.375 21:28:04 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:10.375 21:28:04 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:10.375 21:28:04 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:27:10.375 21:28:04 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:10.375 21:28:04 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:27:10.375 Found net devices under 0000:27:00.1: cvl_0_1 00:27:10.375 21:28:04 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:27:10.375 21:28:04 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:27:10.375 21:28:04 -- nvmf/common.sh@403 -- # is_hw=yes 00:27:10.375 21:28:04 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:27:10.375 21:28:04 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:27:10.375 21:28:04 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:27:10.375 21:28:04 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:10.375 21:28:04 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:10.375 21:28:04 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:10.375 21:28:04 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:10.375 21:28:04 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:10.375 21:28:04 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:10.375 21:28:04 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:10.375 21:28:04 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:10.375 21:28:04 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:10.375 21:28:04 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:10.375 21:28:04 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:10.375 21:28:04 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:10.375 21:28:04 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:10.375 21:28:04 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:10.375 21:28:04 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:10.375 21:28:04 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:10.375 21:28:04 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:10.636 21:28:04 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:10.636 21:28:04 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:10.636 21:28:04 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:10.636 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:10.636 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:27:10.636 00:27:10.636 --- 10.0.0.2 ping statistics --- 00:27:10.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:10.636 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:27:10.636 21:28:04 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:10.636 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:10.636 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:27:10.636 00:27:10.636 --- 10.0.0.1 ping statistics --- 00:27:10.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:10.636 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:27:10.636 21:28:04 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:10.636 21:28:04 -- nvmf/common.sh@411 -- # return 0 00:27:10.636 21:28:04 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:27:10.636 21:28:04 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:10.636 21:28:04 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:27:10.636 21:28:04 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:27:10.636 21:28:04 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:10.636 21:28:04 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:27:10.636 21:28:04 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:27:10.636 21:28:04 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:27:10.636 21:28:04 -- common/autotest_common.sh@710 -- # xtrace_disable 00:27:10.636 21:28:04 -- common/autotest_common.sh@10 -- # set +x 00:27:10.636 21:28:04 -- host/identify.sh@19 -- # nvmfpid=1575516 00:27:10.636 21:28:04 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:10.636 21:28:04 -- host/identify.sh@23 -- # waitforlisten 1575516 00:27:10.636 21:28:04 -- common/autotest_common.sh@817 -- # '[' -z 1575516 ']' 00:27:10.636 21:28:04 -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:10.636 21:28:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:10.636 21:28:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:10.636 21:28:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:10.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:10.636 21:28:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:10.636 21:28:04 -- common/autotest_common.sh@10 -- # set +x 00:27:10.636 [2024-04-23 21:28:04.802454] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 00:27:10.636 [2024-04-23 21:28:04.802556] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:10.636 EAL: No free 2048 kB hugepages reported on node 1 00:27:10.896 [2024-04-23 21:28:04.921267] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:10.896 [2024-04-23 21:28:05.019727] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:10.896 [2024-04-23 21:28:05.019763] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:10.896 [2024-04-23 21:28:05.019774] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:10.896 [2024-04-23 21:28:05.019783] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:10.896 [2024-04-23 21:28:05.019790] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:10.896 [2024-04-23 21:28:05.019852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:10.896 [2024-04-23 21:28:05.019967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:10.896 [2024-04-23 21:28:05.020062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:10.896 [2024-04-23 21:28:05.020073] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:11.466 21:28:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:11.466 21:28:05 -- common/autotest_common.sh@850 -- # return 0 00:27:11.466 21:28:05 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:11.466 21:28:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:11.466 21:28:05 -- common/autotest_common.sh@10 -- # set +x 00:27:11.466 [2024-04-23 21:28:05.519897] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:11.466 21:28:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:11.466 21:28:05 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:27:11.466 21:28:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:11.466 21:28:05 -- common/autotest_common.sh@10 -- # set +x 00:27:11.466 21:28:05 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:11.466 21:28:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:11.466 21:28:05 -- common/autotest_common.sh@10 -- # set +x 00:27:11.466 Malloc0 00:27:11.466 21:28:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:11.466 21:28:05 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:11.466 21:28:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:11.466 21:28:05 -- common/autotest_common.sh@10 -- # set +x 00:27:11.466 21:28:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:11.466 21:28:05 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:27:11.466 21:28:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:11.466 21:28:05 -- common/autotest_common.sh@10 -- # set +x 00:27:11.466 21:28:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:11.466 21:28:05 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:11.466 21:28:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:11.466 21:28:05 -- common/autotest_common.sh@10 -- # set +x 00:27:11.466 [2024-04-23 21:28:05.624409] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:11.466 21:28:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:11.466 21:28:05 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:11.466 21:28:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:11.466 21:28:05 -- common/autotest_common.sh@10 -- # set +x 00:27:11.466 21:28:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:11.466 21:28:05 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:27:11.466 21:28:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:11.466 21:28:05 -- common/autotest_common.sh@10 -- # set +x 00:27:11.466 [2024-04-23 21:28:05.640129] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:27:11.466 [ 
00:27:11.466 { 00:27:11.466 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:11.466 "subtype": "Discovery", 00:27:11.466 "listen_addresses": [ 00:27:11.466 { 00:27:11.466 "transport": "TCP", 00:27:11.466 "trtype": "TCP", 00:27:11.466 "adrfam": "IPv4", 00:27:11.466 "traddr": "10.0.0.2", 00:27:11.466 "trsvcid": "4420" 00:27:11.466 } 00:27:11.466 ], 00:27:11.466 "allow_any_host": true, 00:27:11.466 "hosts": [] 00:27:11.466 }, 00:27:11.466 { 00:27:11.466 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:11.466 "subtype": "NVMe", 00:27:11.467 "listen_addresses": [ 00:27:11.467 { 00:27:11.467 "transport": "TCP", 00:27:11.467 "trtype": "TCP", 00:27:11.467 "adrfam": "IPv4", 00:27:11.467 "traddr": "10.0.0.2", 00:27:11.467 "trsvcid": "4420" 00:27:11.467 } 00:27:11.467 ], 00:27:11.467 "allow_any_host": true, 00:27:11.467 "hosts": [], 00:27:11.467 "serial_number": "SPDK00000000000001", 00:27:11.467 "model_number": "SPDK bdev Controller", 00:27:11.467 "max_namespaces": 32, 00:27:11.467 "min_cntlid": 1, 00:27:11.467 "max_cntlid": 65519, 00:27:11.467 "namespaces": [ 00:27:11.467 { 00:27:11.467 "nsid": 1, 00:27:11.467 "bdev_name": "Malloc0", 00:27:11.467 "name": "Malloc0", 00:27:11.467 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:27:11.467 "eui64": "ABCDEF0123456789", 00:27:11.467 "uuid": "f9e3a7c7-ca91-408c-bf2c-f35fa96f3539" 00:27:11.467 } 00:27:11.467 ] 00:27:11.467 } 00:27:11.467 ] 00:27:11.467 21:28:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:11.467 21:28:05 -- host/identify.sh@39 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:27:11.467 [2024-04-23 21:28:05.692545] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 
00:27:11.467 [2024-04-23 21:28:05.692659] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1575824 ] 00:27:11.467 EAL: No free 2048 kB hugepages reported on node 1 00:27:11.730 [2024-04-23 21:28:05.747805] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:27:11.730 [2024-04-23 21:28:05.747896] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:27:11.730 [2024-04-23 21:28:05.747906] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:27:11.730 [2024-04-23 21:28:05.747928] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:27:11.730 [2024-04-23 21:28:05.747944] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:27:11.730 [2024-04-23 21:28:05.751671] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:27:11.730 [2024-04-23 21:28:05.751723] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x614000002040 0 00:27:11.730 [2024-04-23 21:28:05.758642] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:27:11.730 [2024-04-23 21:28:05.758663] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:27:11.730 [2024-04-23 21:28:05.758670] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:27:11.730 [2024-04-23 21:28:05.758677] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:27:11.730 [2024-04-23 21:28:05.758728] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:11.730 [2024-04-23 21:28:05.758737] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:11.730 [2024-04-23 21:28:05.758745] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:27:11.730 [2024-04-23 21:28:05.758772] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:27:11.730 [2024-04-23 21:28:05.758794] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:27:11.730 [2024-04-23 21:28:05.765642] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:11.730 [2024-04-23 21:28:05.765656] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:11.730 [2024-04-23 21:28:05.765661] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:11.730 [2024-04-23 21:28:05.765673] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:27:11.730 [2024-04-23 21:28:05.765690] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:27:11.730 [2024-04-23 21:28:05.765704] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:27:11.730 [2024-04-23 21:28:05.765712] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:27:11.730 [2024-04-23 21:28:05.765732] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:11.730 [2024-04-23 21:28:05.765738] nvme_tcp.c: 
949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:11.730 [2024-04-23 21:28:05.765745] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:27:11.730 [2024-04-23 21:28:05.765763] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.730 [2024-04-23 21:28:05.765780] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:27:11.730 [2024-04-23 21:28:05.766062] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:11.730 [2024-04-23 21:28:05.766071] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:11.730 [2024-04-23 21:28:05.766083] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:11.730 [2024-04-23 21:28:05.766090] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:27:11.730 [2024-04-23 21:28:05.766098] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:27:11.730 [2024-04-23 21:28:05.766114] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:27:11.730 [2024-04-23 21:28:05.766125] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:11.730 [2024-04-23 21:28:05.766132] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:11.730 [2024-04-23 21:28:05.766138] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:27:11.730 [2024-04-23 21:28:05.766151] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.730 [2024-04-23 21:28:05.766165] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:27:11.730 [2024-04-23 21:28:05.766267] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:11.730 [2024-04-23 21:28:05.766276] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:11.730 [2024-04-23 21:28:05.766280] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:11.730 [2024-04-23 21:28:05.766285] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:27:11.730 [2024-04-23 21:28:05.766291] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:27:11.730 [2024-04-23 21:28:05.766302] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:27:11.730 [2024-04-23 21:28:05.766310] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:11.730 [2024-04-23 21:28:05.766316] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:11.730 [2024-04-23 21:28:05.766321] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:27:11.730 [2024-04-23 21:28:05.766332] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.730 [2024-04-23 21:28:05.766346] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:27:11.730 [2024-04-23 21:28:05.766541] 
nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:11.730 [2024-04-23 21:28:05.766548] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:11.730 [2024-04-23 21:28:05.766554] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:11.730 [2024-04-23 21:28:05.766559] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:27:11.730 [2024-04-23 21:28:05.766565] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:27:11.730 [2024-04-23 21:28:05.766577] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:11.730 [2024-04-23 21:28:05.766582] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:11.730 [2024-04-23 21:28:05.766588] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:27:11.730 [2024-04-23 21:28:05.766598] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.730 [2024-04-23 21:28:05.766610] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:27:11.730 [2024-04-23 21:28:05.766716] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:11.730 [2024-04-23 21:28:05.766723] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:11.730 [2024-04-23 21:28:05.766727] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:11.730 [2024-04-23 21:28:05.766731] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:27:11.730 [2024-04-23 21:28:05.766738] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:27:11.730 [2024-04-23 21:28:05.766744] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:27:11.730 [2024-04-23 21:28:05.766754] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:27:11.730 [2024-04-23 21:28:05.766863] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:27:11.730 [2024-04-23 21:28:05.766877] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:27:11.730 [2024-04-23 21:28:05.766889] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:11.730 [2024-04-23 21:28:05.766894] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:11.730 [2024-04-23 21:28:05.766900] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:27:11.730 [2024-04-23 21:28:05.766911] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.730 [2024-04-23 21:28:05.766922] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:27:11.730 [2024-04-23 21:28:05.767072] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:11.730 [2024-04-23 21:28:05.767079] 
nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:11.730 [2024-04-23 21:28:05.767083] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:11.730 [2024-04-23 21:28:05.767087] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:27:11.730 [2024-04-23 21:28:05.767095] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:27:11.730 [2024-04-23 21:28:05.767106] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:11.730 [2024-04-23 21:28:05.767113] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:11.730 [2024-04-23 21:28:05.767119] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:27:11.731 [2024-04-23 21:28:05.767128] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.731 [2024-04-23 21:28:05.767140] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:27:11.731 [2024-04-23 21:28:05.767273] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:11.731 [2024-04-23 21:28:05.767280] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:11.731 [2024-04-23 21:28:05.767284] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:11.731 [2024-04-23 21:28:05.767288] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:27:11.731 [2024-04-23 21:28:05.767294] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:27:11.731 [2024-04-23 21:28:05.767300] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:27:11.731 [2024-04-23 21:28:05.767310] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:27:11.731 [2024-04-23 21:28:05.767323] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:27:11.731 [2024-04-23 21:28:05.767337] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:11.731 [2024-04-23 21:28:05.767343] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:27:11.731 [2024-04-23 21:28:05.767353] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.731 [2024-04-23 21:28:05.767365] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:27:11.731 [2024-04-23 21:28:05.767512] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:11.731 [2024-04-23 21:28:05.767519] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:11.731 [2024-04-23 21:28:05.767523] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:11.731 [2024-04-23 21:28:05.767529] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=4096, cccid=0 00:27:11.731 [2024-04-23 21:28:05.767536] 
nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x614000002040): expected_datao=0, payload_size=4096 00:27:11.731 [2024-04-23 21:28:05.767542] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:11.731 [2024-04-23 21:28:05.767693] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:11.731 [2024-04-23 21:28:05.767699] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:11.731 [2024-04-23 21:28:05.811638] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:11.731 [2024-04-23 21:28:05.811655] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:11.731 [2024-04-23 21:28:05.811659] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:11.731 [2024-04-23 21:28:05.811665] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:27:11.731 [2024-04-23 21:28:05.811680] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:27:11.731 [2024-04-23 21:28:05.811693] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:27:11.731 [2024-04-23 21:28:05.811699] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:27:11.731 [2024-04-23 21:28:05.811712] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:27:11.731 [2024-04-23 21:28:05.811719] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:27:11.731 [2024-04-23 21:28:05.811726] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:27:11.731 [2024-04-23 21:28:05.811736] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:27:11.731 [2024-04-23 21:28:05.811751] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:11.731 [2024-04-23 21:28:05.811758] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:11.731 [2024-04-23 21:28:05.811764] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:27:11.731 [2024-04-23 21:28:05.811776] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:11.731 [2024-04-23 21:28:05.811791] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:27:11.731 [2024-04-23 21:28:05.811955] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:11.731 [2024-04-23 21:28:05.811962] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:11.731 [2024-04-23 21:28:05.811967] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:11.731 [2024-04-23 21:28:05.811971] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:27:11.731 [2024-04-23 21:28:05.811981] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:11.731 [2024-04-23 21:28:05.811988] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:11.731 [2024-04-23 21:28:05.811995] nvme_tcp.c: 
958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:27:11.731 [2024-04-23 21:28:05.812004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:11.731 [2024-04-23 21:28:05.812012] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:11.731 [2024-04-23 21:28:05.812016] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:11.731 [2024-04-23 21:28:05.812021] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x614000002040) 00:27:11.731 [2024-04-23 21:28:05.812028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:11.731 [2024-04-23 21:28:05.812034] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:11.731 [2024-04-23 21:28:05.812038] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:11.731 [2024-04-23 21:28:05.812042] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x614000002040) 00:27:11.731 [2024-04-23 21:28:05.812049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:11.731 [2024-04-23 21:28:05.812055] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:11.731 [2024-04-23 21:28:05.812059] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:11.731 [2024-04-23 21:28:05.812063] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:27:11.731 [2024-04-23 21:28:05.812070] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:11.731 [2024-04-23 21:28:05.812075] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:27:11.731 [2024-04-23 21:28:05.812086] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:27:11.731 [2024-04-23 21:28:05.812094] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:11.731 [2024-04-23 21:28:05.812099] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:27:11.731 [2024-04-23 21:28:05.812110] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.731 [2024-04-23 21:28:05.812123] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:27:11.731 [2024-04-23 21:28:05.812129] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b260, cid 1, qid 0 00:27:11.731 [2024-04-23 21:28:05.812134] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b3c0, cid 2, qid 0 00:27:11.731 [2024-04-23 21:28:05.812140] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:27:11.731 [2024-04-23 21:28:05.812145] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:27:11.731 [2024-04-23 21:28:05.812280] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:11.731 [2024-04-23 21:28:05.812287] 
nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:11.731 [2024-04-23 21:28:05.812291] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:11.731 [2024-04-23 21:28:05.812296] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:27:11.731 [2024-04-23 21:28:05.812304] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:27:11.731 [2024-04-23 21:28:05.812311] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:27:11.731 [2024-04-23 21:28:05.812328] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:11.731 [2024-04-23 21:28:05.812334] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:27:11.731 [2024-04-23 21:28:05.812348] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.731 [2024-04-23 21:28:05.812359] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:27:11.731 [2024-04-23 21:28:05.812502] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:11.731 [2024-04-23 21:28:05.812509] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:11.731 [2024-04-23 21:28:05.812514] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:11.731 [2024-04-23 21:28:05.812520] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=4096, cccid=4 00:27:11.731 [2024-04-23 21:28:05.812526] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x614000002040): expected_datao=0, payload_size=4096 00:27:11.731 [2024-04-23 21:28:05.812534] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:11.731 [2024-04-23 21:28:05.812543] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:11.731 [2024-04-23 21:28:05.812548] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:11.731 [2024-04-23 21:28:05.812594] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:11.731 [2024-04-23 21:28:05.812600] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:11.731 [2024-04-23 21:28:05.812605] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:11.731 [2024-04-23 21:28:05.812610] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:27:11.731 [2024-04-23 21:28:05.812627] nvme_ctrlr.c:4036:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:27:11.731 [2024-04-23 21:28:05.812668] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:11.731 [2024-04-23 21:28:05.812674] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:27:11.731 [2024-04-23 21:28:05.812684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.731 [2024-04-23 21:28:05.812692] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:11.731 [2024-04-23 21:28:05.812700] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
enter 00:27:11.731 [2024-04-23 21:28:05.812705] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x614000002040) 00:27:11.732 [2024-04-23 21:28:05.812714] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:27:11.732 [2024-04-23 21:28:05.812726] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:27:11.732 [2024-04-23 21:28:05.812732] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:27:11.732 [2024-04-23 21:28:05.812939] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:11.732 [2024-04-23 21:28:05.812947] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:11.732 [2024-04-23 21:28:05.812952] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:11.732 [2024-04-23 21:28:05.812957] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=1024, cccid=4 00:27:11.732 [2024-04-23 21:28:05.812964] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x614000002040): expected_datao=0, payload_size=1024 00:27:11.732 [2024-04-23 21:28:05.812970] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:11.732 [2024-04-23 21:28:05.812978] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:11.732 [2024-04-23 21:28:05.812986] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:11.732 [2024-04-23 21:28:05.812993] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:11.732 [2024-04-23 21:28:05.813003] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:11.732 [2024-04-23 21:28:05.813007] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:11.732 [2024-04-23 21:28:05.813012] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x614000002040 00:27:11.732 [2024-04-23 21:28:05.853859] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:11.732 [2024-04-23 21:28:05.853874] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:11.732 [2024-04-23 21:28:05.853878] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:11.732 [2024-04-23 21:28:05.853883] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:27:11.732 [2024-04-23 21:28:05.853906] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:11.732 [2024-04-23 21:28:05.853912] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:27:11.732 [2024-04-23 21:28:05.853923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.732 [2024-04-23 21:28:05.853939] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:27:11.732 [2024-04-23 21:28:05.854066] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:11.732 [2024-04-23 21:28:05.854073] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:11.732 [2024-04-23 21:28:05.854077] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:11.732 [2024-04-23 21:28:05.854082] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0x614000002040): datao=0, datal=3072, cccid=4 00:27:11.732 [2024-04-23 21:28:05.854087] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x614000002040): expected_datao=0, payload_size=3072 00:27:11.732 [2024-04-23 21:28:05.854092] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:11.732 [2024-04-23 21:28:05.854264] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:11.732 [2024-04-23 21:28:05.854269] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:11.732 [2024-04-23 21:28:05.898643] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:11.732 [2024-04-23 21:28:05.898657] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:11.732 [2024-04-23 21:28:05.898661] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:11.732 [2024-04-23 21:28:05.898666] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:27:11.732 [2024-04-23 21:28:05.898681] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:11.732 [2024-04-23 21:28:05.898687] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:27:11.732 [2024-04-23 21:28:05.898697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.732 [2024-04-23 21:28:05.898717] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:27:11.732 [2024-04-23 21:28:05.898865] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:11.732 [2024-04-23 21:28:05.898872] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:11.732 [2024-04-23 21:28:05.898876] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:11.732 [2024-04-23 21:28:05.898880] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=8, cccid=4 00:27:11.732 [2024-04-23 21:28:05.898885] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x614000002040): expected_datao=0, payload_size=8 00:27:11.732 [2024-04-23 21:28:05.898890] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:11.732 [2024-04-23 21:28:05.898901] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:11.732 [2024-04-23 21:28:05.898906] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:11.732 [2024-04-23 21:28:05.939831] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:11.732 [2024-04-23 21:28:05.939845] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:11.732 [2024-04-23 21:28:05.939850] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:11.732 [2024-04-23 21:28:05.939855] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:27:11.732 ===================================================== 00:27:11.732 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:11.732 ===================================================== 00:27:11.732 Controller Capabilities/Features 00:27:11.732 ================================ 00:27:11.732 Vendor ID: 0000 00:27:11.732 Subsystem Vendor ID: 0000 00:27:11.732 Serial Number: .................... 
00:27:11.732 Model Number: ........................................ 00:27:11.732 Firmware Version: 24.05 00:27:11.732 Recommended Arb Burst: 0 00:27:11.732 IEEE OUI Identifier: 00 00 00 00:27:11.732 Multi-path I/O 00:27:11.732 May have multiple subsystem ports: No 00:27:11.732 May have multiple controllers: No 00:27:11.732 Associated with SR-IOV VF: No 00:27:11.732 Max Data Transfer Size: 131072 00:27:11.732 Max Number of Namespaces: 0 00:27:11.732 Max Number of I/O Queues: 1024 00:27:11.732 NVMe Specification Version (VS): 1.3 00:27:11.732 NVMe Specification Version (Identify): 1.3 00:27:11.732 Maximum Queue Entries: 128 00:27:11.732 Contiguous Queues Required: Yes 00:27:11.732 Arbitration Mechanisms Supported 00:27:11.732 Weighted Round Robin: Not Supported 00:27:11.732 Vendor Specific: Not Supported 00:27:11.732 Reset Timeout: 15000 ms 00:27:11.732 Doorbell Stride: 4 bytes 00:27:11.732 NVM Subsystem Reset: Not Supported 00:27:11.732 Command Sets Supported 00:27:11.732 NVM Command Set: Supported 00:27:11.732 Boot Partition: Not Supported 00:27:11.732 Memory Page Size Minimum: 4096 bytes 00:27:11.732 Memory Page Size Maximum: 4096 bytes 00:27:11.732 Persistent Memory Region: Not Supported 00:27:11.732 Optional Asynchronous Events Supported 00:27:11.732 Namespace Attribute Notices: Not Supported 00:27:11.732 Firmware Activation Notices: Not Supported 00:27:11.732 ANA Change Notices: Not Supported 00:27:11.732 PLE Aggregate Log Change Notices: Not Supported 00:27:11.732 LBA Status Info Alert Notices: Not Supported 00:27:11.732 EGE Aggregate Log Change Notices: Not Supported 00:27:11.732 Normal NVM Subsystem Shutdown event: Not Supported 00:27:11.732 Zone Descriptor Change Notices: Not Supported 00:27:11.732 Discovery Log Change Notices: Supported 00:27:11.732 Controller Attributes 00:27:11.732 128-bit Host Identifier: Not Supported 00:27:11.732 Non-Operational Permissive Mode: Not Supported 00:27:11.732 NVM Sets: Not Supported 00:27:11.732 Read Recovery Levels: Not Supported 00:27:11.732 Endurance Groups: Not Supported 00:27:11.732 Predictable Latency Mode: Not Supported 00:27:11.732 Traffic Based Keep ALive: Not Supported 00:27:11.732 Namespace Granularity: Not Supported 00:27:11.732 SQ Associations: Not Supported 00:27:11.732 UUID List: Not Supported 00:27:11.732 Multi-Domain Subsystem: Not Supported 00:27:11.732 Fixed Capacity Management: Not Supported 00:27:11.732 Variable Capacity Management: Not Supported 00:27:11.732 Delete Endurance Group: Not Supported 00:27:11.732 Delete NVM Set: Not Supported 00:27:11.732 Extended LBA Formats Supported: Not Supported 00:27:11.732 Flexible Data Placement Supported: Not Supported 00:27:11.732 00:27:11.732 Controller Memory Buffer Support 00:27:11.732 ================================ 00:27:11.732 Supported: No 00:27:11.732 00:27:11.732 Persistent Memory Region Support 00:27:11.732 ================================ 00:27:11.732 Supported: No 00:27:11.732 00:27:11.732 Admin Command Set Attributes 00:27:11.732 ============================ 00:27:11.732 Security Send/Receive: Not Supported 00:27:11.732 Format NVM: Not Supported 00:27:11.732 Firmware Activate/Download: Not Supported 00:27:11.732 Namespace Management: Not Supported 00:27:11.732 Device Self-Test: Not Supported 00:27:11.732 Directives: Not Supported 00:27:11.732 NVMe-MI: Not Supported 00:27:11.732 Virtualization Management: Not Supported 00:27:11.732 Doorbell Buffer Config: Not Supported 00:27:11.732 Get LBA Status Capability: Not Supported 00:27:11.732 Command & Feature Lockdown Capability: 
Not Supported 00:27:11.732 Abort Command Limit: 1 00:27:11.732 Async Event Request Limit: 4 00:27:11.732 Number of Firmware Slots: N/A 00:27:11.732 Firmware Slot 1 Read-Only: N/A 00:27:11.732 Firmware Activation Without Reset: N/A 00:27:11.732 Multiple Update Detection Support: N/A 00:27:11.732 Firmware Update Granularity: No Information Provided 00:27:11.732 Per-Namespace SMART Log: No 00:27:11.732 Asymmetric Namespace Access Log Page: Not Supported 00:27:11.732 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:11.732 Command Effects Log Page: Not Supported 00:27:11.732 Get Log Page Extended Data: Supported 00:27:11.732 Telemetry Log Pages: Not Supported 00:27:11.733 Persistent Event Log Pages: Not Supported 00:27:11.733 Supported Log Pages Log Page: May Support 00:27:11.733 Commands Supported & Effects Log Page: Not Supported 00:27:11.733 Feature Identifiers & Effects Log Page:May Support 00:27:11.733 NVMe-MI Commands & Effects Log Page: May Support 00:27:11.733 Data Area 4 for Telemetry Log: Not Supported 00:27:11.733 Error Log Page Entries Supported: 128 00:27:11.733 Keep Alive: Not Supported 00:27:11.733 00:27:11.733 NVM Command Set Attributes 00:27:11.733 ========================== 00:27:11.733 Submission Queue Entry Size 00:27:11.733 Max: 1 00:27:11.733 Min: 1 00:27:11.733 Completion Queue Entry Size 00:27:11.733 Max: 1 00:27:11.733 Min: 1 00:27:11.733 Number of Namespaces: 0 00:27:11.733 Compare Command: Not Supported 00:27:11.733 Write Uncorrectable Command: Not Supported 00:27:11.733 Dataset Management Command: Not Supported 00:27:11.733 Write Zeroes Command: Not Supported 00:27:11.733 Set Features Save Field: Not Supported 00:27:11.733 Reservations: Not Supported 00:27:11.733 Timestamp: Not Supported 00:27:11.733 Copy: Not Supported 00:27:11.733 Volatile Write Cache: Not Present 00:27:11.733 Atomic Write Unit (Normal): 1 00:27:11.733 Atomic Write Unit (PFail): 1 00:27:11.733 Atomic Compare & Write Unit: 1 00:27:11.733 Fused Compare & Write: Supported 00:27:11.733 Scatter-Gather List 00:27:11.733 SGL Command Set: Supported 00:27:11.733 SGL Keyed: Supported 00:27:11.733 SGL Bit Bucket Descriptor: Not Supported 00:27:11.733 SGL Metadata Pointer: Not Supported 00:27:11.733 Oversized SGL: Not Supported 00:27:11.733 SGL Metadata Address: Not Supported 00:27:11.733 SGL Offset: Supported 00:27:11.733 Transport SGL Data Block: Not Supported 00:27:11.733 Replay Protected Memory Block: Not Supported 00:27:11.733 00:27:11.733 Firmware Slot Information 00:27:11.733 ========================= 00:27:11.733 Active slot: 0 00:27:11.733 00:27:11.733 00:27:11.733 Error Log 00:27:11.733 ========= 00:27:11.733 00:27:11.733 Active Namespaces 00:27:11.733 ================= 00:27:11.733 Discovery Log Page 00:27:11.733 ================== 00:27:11.733 Generation Counter: 2 00:27:11.733 Number of Records: 2 00:27:11.733 Record Format: 0 00:27:11.733 00:27:11.733 Discovery Log Entry 0 00:27:11.733 ---------------------- 00:27:11.733 Transport Type: 3 (TCP) 00:27:11.733 Address Family: 1 (IPv4) 00:27:11.733 Subsystem Type: 3 (Current Discovery Subsystem) 00:27:11.733 Entry Flags: 00:27:11.733 Duplicate Returned Information: 1 00:27:11.733 Explicit Persistent Connection Support for Discovery: 1 00:27:11.733 Transport Requirements: 00:27:11.733 Secure Channel: Not Required 00:27:11.733 Port ID: 0 (0x0000) 00:27:11.733 Controller ID: 65535 (0xffff) 00:27:11.733 Admin Max SQ Size: 128 00:27:11.733 Transport Service Identifier: 4420 00:27:11.733 NVM Subsystem Qualified Name: 
nqn.2014-08.org.nvmexpress.discovery 00:27:11.733 Transport Address: 10.0.0.2 00:27:11.733 Discovery Log Entry 1 00:27:11.733 ---------------------- 00:27:11.733 Transport Type: 3 (TCP) 00:27:11.733 Address Family: 1 (IPv4) 00:27:11.733 Subsystem Type: 2 (NVM Subsystem) 00:27:11.733 Entry Flags: 00:27:11.733 Duplicate Returned Information: 0 00:27:11.733 Explicit Persistent Connection Support for Discovery: 0 00:27:11.733 Transport Requirements: 00:27:11.733 Secure Channel: Not Required 00:27:11.733 Port ID: 0 (0x0000) 00:27:11.733 Controller ID: 65535 (0xffff) 00:27:11.733 Admin Max SQ Size: 128 00:27:11.733 Transport Service Identifier: 4420 00:27:11.733 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:27:11.733 Transport Address: 10.0.0.2 [2024-04-23 21:28:05.939974] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:27:11.733 [2024-04-23 21:28:05.939991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.733 [2024-04-23 21:28:05.940000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.733 [2024-04-23 21:28:05.940006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.733 [2024-04-23 21:28:05.940013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.733 [2024-04-23 21:28:05.940027] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:11.733 [2024-04-23 21:28:05.940033] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:11.733 [2024-04-23 21:28:05.940039] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:27:11.733 [2024-04-23 21:28:05.940050] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.733 [2024-04-23 21:28:05.940068] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:27:11.733 [2024-04-23 21:28:05.940319] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:11.733 [2024-04-23 21:28:05.940327] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:11.733 [2024-04-23 21:28:05.940335] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:11.733 [2024-04-23 21:28:05.940340] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:27:11.733 [2024-04-23 21:28:05.940350] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:11.733 [2024-04-23 21:28:05.940355] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:11.733 [2024-04-23 21:28:05.940360] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:27:11.733 [2024-04-23 21:28:05.940369] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.733 [2024-04-23 21:28:05.940383] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:27:11.733 [2024-04-23 21:28:05.940530] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:11.733 [2024-04-23 21:28:05.940536] 
nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:11.733 [2024-04-23 21:28:05.940540] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:11.733 [2024-04-23 21:28:05.940545] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:27:11.733 [2024-04-23 21:28:05.940553] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:27:11.733 [2024-04-23 21:28:05.940559] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:27:11.733 [2024-04-23 21:28:05.940572] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:11.733 [2024-04-23 21:28:05.940577] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:11.733 [2024-04-23 21:28:05.940582] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:27:11.733 [2024-04-23 21:28:05.940590] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.733 [2024-04-23 21:28:05.940601] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:27:11.733 [2024-04-23 21:28:05.940727] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:11.733 [2024-04-23 21:28:05.940733] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:11.733 [2024-04-23 21:28:05.940738] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:11.733 [2024-04-23 21:28:05.940742] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:27:11.733 [2024-04-23 21:28:05.940753] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:11.733 [2024-04-23 21:28:05.940757] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:11.733 [2024-04-23 21:28:05.940762] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:27:11.733 [2024-04-23 21:28:05.940770] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.733 [2024-04-23 21:28:05.940780] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:27:11.733 [2024-04-23 21:28:05.940873] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:11.733 [2024-04-23 21:28:05.940879] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:11.733 [2024-04-23 21:28:05.940883] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:11.733 [2024-04-23 21:28:05.940888] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:27:11.733 [2024-04-23 21:28:05.940898] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:11.733 [2024-04-23 21:28:05.940902] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:11.733 [2024-04-23 21:28:05.940907] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:27:11.733 [2024-04-23 21:28:05.940915] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.733 [2024-04-23 21:28:05.940924] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:27:11.733 [2024-04-23 21:28:05.941026] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:11.733 [2024-04-23 21:28:05.941033] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:11.733 [2024-04-23 21:28:05.941036] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:11.733 [2024-04-23 21:28:05.941041] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:27:11.733 [2024-04-23 21:28:05.941051] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:11.733 [2024-04-23 21:28:05.941056] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:11.733 [2024-04-23 21:28:05.941060] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:27:11.733 [2024-04-23 21:28:05.941073] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.733 [2024-04-23 21:28:05.941082] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:27:11.734 [2024-04-23 21:28:05.941229] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:11.734 [2024-04-23 21:28:05.941238] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:11.734 [2024-04-23 21:28:05.941242] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:11.734 [2024-04-23 21:28:05.941247] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:27:11.734 [2024-04-23 21:28:05.941257] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:11.734 [2024-04-23 21:28:05.941261] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:11.734 [2024-04-23 21:28:05.941265] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:27:11.734 [2024-04-23 21:28:05.941274] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.734 [2024-04-23 21:28:05.941284] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:27:11.734 [2024-04-23 21:28:05.941529] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:11.734 [2024-04-23 21:28:05.941540] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:11.734 [2024-04-23 21:28:05.941544] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:11.734 [2024-04-23 21:28:05.941548] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:27:11.734 [2024-04-23 21:28:05.941559] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:11.734 [2024-04-23 21:28:05.941563] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:11.734 [2024-04-23 21:28:05.941567] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:27:11.734 [2024-04-23 21:28:05.941577] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.734 [2024-04-23 21:28:05.941587] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:27:11.734 [2024-04-23 21:28:05.941684] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:27:11.734 [2024-04-23 21:28:05.941690] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:11.734 [2024-04-23 21:28:05.941694] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:11.734 [2024-04-23 21:28:05.941699] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:27:11.734 [2024-04-23 21:28:05.941708] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:11.734 [2024-04-23 21:28:05.941713] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:11.734 [2024-04-23 21:28:05.941717] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:27:11.734 [2024-04-23 21:28:05.941725] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.734 [2024-04-23 21:28:05.941735] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:27:11.734 [2024-04-23 21:28:05.941869] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:11.734 [2024-04-23 21:28:05.941876] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:11.734 [2024-04-23 21:28:05.941880] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:11.734 [2024-04-23 21:28:05.941884] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:27:11.734 [2024-04-23 21:28:05.941895] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:11.734 [2024-04-23 21:28:05.941899] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:11.734 [2024-04-23 21:28:05.941903] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:27:11.734 [2024-04-23 21:28:05.941911] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.734 [2024-04-23 21:28:05.941920] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:27:11.734 [2024-04-23 21:28:05.942020] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:11.734 [2024-04-23 21:28:05.942028] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:11.734 [2024-04-23 21:28:05.942032] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:11.734 [2024-04-23 21:28:05.942036] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:27:11.734 [2024-04-23 21:28:05.942046] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:11.734 [2024-04-23 21:28:05.942051] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:11.734 [2024-04-23 21:28:05.942055] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:27:11.734 [2024-04-23 21:28:05.942062] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.734 [2024-04-23 21:28:05.942072] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:27:11.734 [2024-04-23 21:28:05.942171] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:11.734 [2024-04-23 21:28:05.942178] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:27:11.734 [2024-04-23 21:28:05.942182] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:11.734 [2024-04-23 21:28:05.942186] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:27:11.734 [2024-04-23 21:28:05.942196] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:11.734 [2024-04-23 21:28:05.942201] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:11.734 [2024-04-23 21:28:05.942205] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:27:11.734 [2024-04-23 21:28:05.942213] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.734 [2024-04-23 21:28:05.942222] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:27:11.734 [2024-04-23 21:28:05.942312] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:11.734 [2024-04-23 21:28:05.942319] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:11.734 [2024-04-23 21:28:05.942323] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:11.734 [2024-04-23 21:28:05.942327] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:27:11.734 [2024-04-23 21:28:05.942337] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:11.734 [2024-04-23 21:28:05.942341] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:11.734 [2024-04-23 21:28:05.942346] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:27:11.734 [2024-04-23 21:28:05.942353] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.734 [2024-04-23 21:28:05.942363] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:27:11.734 [2024-04-23 21:28:05.942474] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:11.734 [2024-04-23 21:28:05.942480] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:11.734 [2024-04-23 21:28:05.942484] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:11.734 [2024-04-23 21:28:05.942488] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:27:11.734 [2024-04-23 21:28:05.942498] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:11.734 [2024-04-23 21:28:05.942503] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:11.734 [2024-04-23 21:28:05.942507] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:27:11.734 [2024-04-23 21:28:05.942514] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.734 [2024-04-23 21:28:05.942524] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:27:11.734 [2024-04-23 21:28:05.942623] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:11.734 [2024-04-23 21:28:05.946640] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:11.734 [2024-04-23 21:28:05.946647] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:11.734 [2024-04-23 
21:28:05.946652] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:27:11.734 [2024-04-23 21:28:05.946664] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:11.734 [2024-04-23 21:28:05.946668] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:11.734 [2024-04-23 21:28:05.946673] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:27:11.734 [2024-04-23 21:28:05.946681] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.734 [2024-04-23 21:28:05.946693] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:27:11.734 [2024-04-23 21:28:05.946813] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:11.734 [2024-04-23 21:28:05.946823] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:11.734 [2024-04-23 21:28:05.946827] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:11.734 [2024-04-23 21:28:05.946831] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:27:11.735 [2024-04-23 21:28:05.946840] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:27:11.735 00:27:11.735 21:28:05 -- host/identify.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:27:11.998 [2024-04-23 21:28:06.018379] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 
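The spdk_nvme_identify run started above connects to the target described by the -r transport-ID string, reads the controller's identify data, and prints the capability report that appears further down. A minimal C sketch of that flow against SPDK's public nvme.h/env.h API follows; error handling is trimmed, the transport string is the one from the command line above, and the process name is made up for the sketch.

#include <stdio.h>

#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
    struct spdk_env_opts opts;
    struct spdk_nvme_transport_id trid = {0};
    struct spdk_nvme_ctrlr *ctrlr;
    const struct spdk_nvme_ctrlr_data *cdata;

    /* Environment bring-up; this is what the "DPDK EAL parameters" line
     * below corresponds to. "identify_sketch" is a made-up name. */
    spdk_env_opts_init(&opts);
    opts.name = "identify_sketch";
    if (spdk_env_init(&opts) < 0) {
        return 1;
    }

    /* The -r argument from the command line above, parsed into a trid. */
    if (spdk_nvme_transport_id_parse(&trid,
            "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
            "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
        return 1;
    }

    /* Connecting drives the controller-init state machine traced below. */
    ctrlr = spdk_nvme_connect(&trid, NULL, 0);
    if (ctrlr == NULL) {
        return 1;
    }

    /* Cached IDENTIFY CONTROLLER data; sn/mn/fr are fixed-width fields
     * without NUL termination, hence the explicit printf precisions. */
    cdata = spdk_nvme_ctrlr_get_data(ctrlr);
    printf("Vendor ID: %04x\n", cdata->vid);
    printf("Serial Number: %.20s\n", (const char *)cdata->sn);
    printf("Model Number: %.40s\n", (const char *)cdata->mn);
    printf("Firmware Version: %.8s\n", (const char *)cdata->fr);

    spdk_nvme_detach(ctrlr);
    return 0;
}

Building such a sketch requires linking against the SPDK and DPDK libraries, for example via the pkg-config files an SPDK build installs.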
00:27:11.998 [2024-04-23 21:28:06.018464] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1575831 ] 00:27:11.998 EAL: No free 2048 kB hugepages reported on node 1 00:27:11.998 [2024-04-23 21:28:06.067431] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:27:11.998 [2024-04-23 21:28:06.067502] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:27:11.998 [2024-04-23 21:28:06.067512] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:27:11.998 [2024-04-23 21:28:06.067532] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:27:11.998 [2024-04-23 21:28:06.067544] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:27:11.998 [2024-04-23 21:28:06.068092] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:27:11.998 [2024-04-23 21:28:06.068122] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x614000002040 0 00:27:11.998 [2024-04-23 21:28:06.078639] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:27:11.998 [2024-04-23 21:28:06.078655] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:27:11.998 [2024-04-23 21:28:06.078662] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:27:11.998 [2024-04-23 21:28:06.078667] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:27:11.998 [2024-04-23 21:28:06.078709] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:11.998 [2024-04-23 21:28:06.078718] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:11.998 [2024-04-23 21:28:06.078725] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:27:11.998 [2024-04-23 21:28:06.078746] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:27:11.998 [2024-04-23 21:28:06.078772] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:27:11.998 [2024-04-23 21:28:06.086643] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:11.998 [2024-04-23 21:28:06.086656] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:11.998 [2024-04-23 21:28:06.086660] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:11.998 [2024-04-23 21:28:06.086667] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:27:11.998 [2024-04-23 21:28:06.086682] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:27:11.998 [2024-04-23 21:28:06.086695] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:27:11.998 [2024-04-23 21:28:06.086702] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:27:11.998 [2024-04-23 21:28:06.086718] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:11.998 [2024-04-23 21:28:06.086724] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
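The "setting state to ..." messages in this trace are SPDK's controller-initialization state machine, and each FABRIC PROPERTY GET/SET capsule is that machine reading or writing one controller property (VS, CAP, CC, CSTS) over the fabric. Below is a C sketch of the core enable handshake, matching the "check en", "enable controller by writing CC.EN = 1", and "wait for CSTS.RDY = 1" states that follow; prop_get/prop_set are hypothetical stand-ins for the Property Get/Set commands, backed here by a simulated register file so the sketch runs to completion, and the register offsets and bit positions are from the NVMe specification.

#include <stdint.h>
#include <stdio.h>

/* Controller register offsets per the NVMe specification. */
#define REG_CC   0x14 /* configuration; bit 0 is CC.EN   */
#define REG_CSTS 0x1c /* status;        bit 0 is CSTS.RDY */

/* Hypothetical stand-ins for the FABRIC PROPERTY GET/SET capsules in the
 * log. Here they hit a tiny simulated register file, indexed directly by
 * register offset, in which CSTS.RDY immediately tracks CC.EN. */
static uint64_t regs[0x20];

static uint64_t prop_get(uint32_t off)
{
    if (off == REG_CSTS) {
        regs[REG_CSTS] = regs[REG_CC] & 1; /* RDY follows EN */
    }
    return regs[off];
}

static void prop_set(uint32_t off, uint64_t val)
{
    regs[off] = val;
}

/* The "check en" -> disable -> "enable controller by writing CC.EN = 1"
 * -> "wait for CSTS.RDY = 1" sequence the state machine logs here. */
static void enable_controller(void)
{
    uint64_t cc = prop_get(REG_CC);

    if (cc & 1) {                        /* already enabled: disable first */
        prop_set(REG_CC, cc & ~1ULL);
        while (prop_get(REG_CSTS) & 1) { /* wait for CSTS.RDY = 0 */
        }
    }
    prop_set(REG_CC, cc | 1);            /* set CC.EN = 1 */
    while (!(prop_get(REG_CSTS) & 1)) {  /* wait for CSTS.RDY = 1 */
    }
    printf("controller is ready\n");
}

int main(void)
{
    enable_controller();
    return 0;
}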
00:27:11.998 [2024-04-23 21:28:06.086730] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:27:11.998 [2024-04-23 21:28:06.086745] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.998 [2024-04-23 21:28:06.086763] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:27:11.998 [2024-04-23 21:28:06.086897] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:11.998 [2024-04-23 21:28:06.086906] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:11.998 [2024-04-23 21:28:06.086915] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:11.998 [2024-04-23 21:28:06.086921] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:27:11.998 [2024-04-23 21:28:06.086929] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:27:11.998 [2024-04-23 21:28:06.086939] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:27:11.998 [2024-04-23 21:28:06.086948] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:11.998 [2024-04-23 21:28:06.086955] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:11.998 [2024-04-23 21:28:06.086960] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:27:11.998 [2024-04-23 21:28:06.086972] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.998 [2024-04-23 21:28:06.086987] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:27:11.998 [2024-04-23 21:28:06.087088] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:11.998 [2024-04-23 21:28:06.087097] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:11.998 [2024-04-23 21:28:06.087101] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:11.998 [2024-04-23 21:28:06.087105] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:27:11.998 [2024-04-23 21:28:06.087114] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:27:11.998 [2024-04-23 21:28:06.087127] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:27:11.998 [2024-04-23 21:28:06.087137] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:11.998 [2024-04-23 21:28:06.087143] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:11.998 [2024-04-23 21:28:06.087148] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:27:11.998 [2024-04-23 21:28:06.087157] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.998 [2024-04-23 21:28:06.087174] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:27:11.998 [2024-04-23 21:28:06.087415] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:11.998 [2024-04-23 21:28:06.087422] 
nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:11.998 [2024-04-23 21:28:06.087425] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:11.998 [2024-04-23 21:28:06.087429] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:27:11.998 [2024-04-23 21:28:06.087436] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:27:11.998 [2024-04-23 21:28:06.087446] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:11.998 [2024-04-23 21:28:06.087452] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:11.998 [2024-04-23 21:28:06.087457] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:27:11.998 [2024-04-23 21:28:06.087466] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.998 [2024-04-23 21:28:06.087478] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:27:11.998 [2024-04-23 21:28:06.087579] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:11.998 [2024-04-23 21:28:06.087586] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:11.998 [2024-04-23 21:28:06.087590] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:11.998 [2024-04-23 21:28:06.087594] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:27:11.998 [2024-04-23 21:28:06.087601] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:27:11.998 [2024-04-23 21:28:06.087607] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:27:11.998 [2024-04-23 21:28:06.087617] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:27:11.998 [2024-04-23 21:28:06.087724] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:27:11.998 [2024-04-23 21:28:06.087732] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:27:11.998 [2024-04-23 21:28:06.087743] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:11.998 [2024-04-23 21:28:06.087748] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:11.998 [2024-04-23 21:28:06.087754] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:27:11.998 [2024-04-23 21:28:06.087763] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.998 [2024-04-23 21:28:06.087774] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:27:11.998 [2024-04-23 21:28:06.087865] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:11.998 [2024-04-23 21:28:06.087872] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:11.998 [2024-04-23 21:28:06.087876] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:11.998 [2024-04-23 
21:28:06.087880] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:27:11.998 [2024-04-23 21:28:06.087886] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:27:11.998 [2024-04-23 21:28:06.087897] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:11.998 [2024-04-23 21:28:06.087902] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:11.998 [2024-04-23 21:28:06.087908] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:27:11.998 [2024-04-23 21:28:06.087919] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.998 [2024-04-23 21:28:06.087928] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:27:11.998 [2024-04-23 21:28:06.088019] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:11.998 [2024-04-23 21:28:06.088026] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:11.998 [2024-04-23 21:28:06.088030] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:11.999 [2024-04-23 21:28:06.088034] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:27:11.999 [2024-04-23 21:28:06.088040] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:27:11.999 [2024-04-23 21:28:06.088046] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:27:11.999 [2024-04-23 21:28:06.088056] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:27:11.999 [2024-04-23 21:28:06.088066] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:27:11.999 [2024-04-23 21:28:06.088080] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:11.999 [2024-04-23 21:28:06.088085] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:27:11.999 [2024-04-23 21:28:06.088095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.999 [2024-04-23 21:28:06.088105] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:27:11.999 [2024-04-23 21:28:06.088388] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:11.999 [2024-04-23 21:28:06.088395] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:11.999 [2024-04-23 21:28:06.088399] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:11.999 [2024-04-23 21:28:06.088404] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=4096, cccid=0 00:27:11.999 [2024-04-23 21:28:06.088413] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x614000002040): expected_datao=0, payload_size=4096 00:27:11.999 [2024-04-23 21:28:06.088420] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:11.999 
[2024-04-23 21:28:06.088430] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:11.999 [2024-04-23 21:28:06.088435] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:11.999 [2024-04-23 21:28:06.088592] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:11.999 [2024-04-23 21:28:06.088598] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:11.999 [2024-04-23 21:28:06.088602] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:11.999 [2024-04-23 21:28:06.088607] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:27:11.999 [2024-04-23 21:28:06.088618] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:27:11.999 [2024-04-23 21:28:06.088625] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:27:11.999 [2024-04-23 21:28:06.088636] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:27:11.999 [2024-04-23 21:28:06.088643] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:27:11.999 [2024-04-23 21:28:06.088649] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:27:11.999 [2024-04-23 21:28:06.088655] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:27:11.999 [2024-04-23 21:28:06.088669] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:27:11.999 [2024-04-23 21:28:06.088678] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:11.999 [2024-04-23 21:28:06.088684] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:11.999 [2024-04-23 21:28:06.088689] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:27:11.999 [2024-04-23 21:28:06.088699] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:11.999 [2024-04-23 21:28:06.088710] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:27:11.999 [2024-04-23 21:28:06.088821] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:11.999 [2024-04-23 21:28:06.088828] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:11.999 [2024-04-23 21:28:06.088832] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:11.999 [2024-04-23 21:28:06.088837] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:27:11.999 [2024-04-23 21:28:06.088845] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:11.999 [2024-04-23 21:28:06.088851] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:11.999 [2024-04-23 21:28:06.088856] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:27:11.999 [2024-04-23 21:28:06.088866] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:11.999 [2024-04-23 21:28:06.088873] nvme_tcp.c: 766:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:27:11.999 [2024-04-23 21:28:06.088878] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:11.999 [2024-04-23 21:28:06.088882] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x614000002040) 00:27:11.999 [2024-04-23 21:28:06.088891] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:11.999 [2024-04-23 21:28:06.088897] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:11.999 [2024-04-23 21:28:06.088902] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:11.999 [2024-04-23 21:28:06.088906] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x614000002040) 00:27:11.999 [2024-04-23 21:28:06.088913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:11.999 [2024-04-23 21:28:06.088919] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:11.999 [2024-04-23 21:28:06.088924] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:11.999 [2024-04-23 21:28:06.088928] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:27:11.999 [2024-04-23 21:28:06.088935] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:11.999 [2024-04-23 21:28:06.088941] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:27:11.999 [2024-04-23 21:28:06.088951] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:27:11.999 [2024-04-23 21:28:06.088959] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:11.999 [2024-04-23 21:28:06.088965] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:27:11.999 [2024-04-23 21:28:06.088974] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.999 [2024-04-23 21:28:06.088986] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:27:11.999 [2024-04-23 21:28:06.088991] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b260, cid 1, qid 0 00:27:11.999 [2024-04-23 21:28:06.088997] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b3c0, cid 2, qid 0 00:27:11.999 [2024-04-23 21:28:06.089002] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:27:11.999 [2024-04-23 21:28:06.089007] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:27:11.999 [2024-04-23 21:28:06.089138] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:11.999 [2024-04-23 21:28:06.089145] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:11.999 [2024-04-23 21:28:06.089149] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:11.999 [2024-04-23 21:28:06.089153] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:27:11.999 [2024-04-23 21:28:06.089161] 
nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:27:11.999 [2024-04-23 21:28:06.089167] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:27:11.999 [2024-04-23 21:28:06.089176] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:27:11.999 [2024-04-23 21:28:06.089185] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:27:11.999 [2024-04-23 21:28:06.089195] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:11.999 [2024-04-23 21:28:06.089201] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:11.999 [2024-04-23 21:28:06.089206] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:27:11.999 [2024-04-23 21:28:06.089214] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:11.999 [2024-04-23 21:28:06.089225] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:27:11.999 [2024-04-23 21:28:06.089329] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:11.999 [2024-04-23 21:28:06.089336] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:11.999 [2024-04-23 21:28:06.089340] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:11.999 [2024-04-23 21:28:06.089344] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:27:11.999 [2024-04-23 21:28:06.089398] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:27:11.999 [2024-04-23 21:28:06.089411] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:27:11.999 [2024-04-23 21:28:06.089420] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:11.999 [2024-04-23 21:28:06.089425] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:27:11.999 [2024-04-23 21:28:06.089434] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.999 [2024-04-23 21:28:06.089445] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:27:11.999 [2024-04-23 21:28:06.089690] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:11.999 [2024-04-23 21:28:06.089697] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:11.999 [2024-04-23 21:28:06.089701] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:11.999 [2024-04-23 21:28:06.089706] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=4096, cccid=4 00:27:11.999 [2024-04-23 21:28:06.089712] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x614000002040): expected_datao=0, payload_size=4096 00:27:11.999 [2024-04-23 21:28:06.089717] nvme_tcp.c: 766:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:27:11.999 [2024-04-23 21:28:06.089863] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:11.999 [2024-04-23 21:28:06.089868] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:11.999 [2024-04-23 21:28:06.134640] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:11.999 [2024-04-23 21:28:06.134654] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:11.999 [2024-04-23 21:28:06.134658] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:11.999 [2024-04-23 21:28:06.134663] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:27:12.000 [2024-04-23 21:28:06.134687] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:27:12.000 [2024-04-23 21:28:06.134702] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:27:12.000 [2024-04-23 21:28:06.134713] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:27:12.000 [2024-04-23 21:28:06.134724] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.000 [2024-04-23 21:28:06.134729] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:27:12.000 [2024-04-23 21:28:06.134741] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.000 [2024-04-23 21:28:06.134755] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:27:12.000 [2024-04-23 21:28:06.134888] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:12.000 [2024-04-23 21:28:06.134898] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:12.000 [2024-04-23 21:28:06.134902] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:12.000 [2024-04-23 21:28:06.134907] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=4096, cccid=4 00:27:12.000 [2024-04-23 21:28:06.134913] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x614000002040): expected_datao=0, payload_size=4096 00:27:12.000 [2024-04-23 21:28:06.134918] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.000 [2024-04-23 21:28:06.135014] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:12.000 [2024-04-23 21:28:06.135018] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:12.000 [2024-04-23 21:28:06.177641] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.000 [2024-04-23 21:28:06.177655] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.000 [2024-04-23 21:28:06.177659] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.000 [2024-04-23 21:28:06.177664] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:27:12.000 [2024-04-23 21:28:06.177686] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:27:12.000 [2024-04-23 21:28:06.177697] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:27:12.000 [2024-04-23 21:28:06.177711] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.000 [2024-04-23 21:28:06.177716] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:27:12.000 [2024-04-23 21:28:06.177727] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.000 [2024-04-23 21:28:06.177744] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:27:12.000 [2024-04-23 21:28:06.177868] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:12.000 [2024-04-23 21:28:06.177875] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:12.000 [2024-04-23 21:28:06.177879] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:12.000 [2024-04-23 21:28:06.177886] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=4096, cccid=4 00:27:12.000 [2024-04-23 21:28:06.177892] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x614000002040): expected_datao=0, payload_size=4096 00:27:12.000 [2024-04-23 21:28:06.177898] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.000 [2024-04-23 21:28:06.177908] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:12.000 [2024-04-23 21:28:06.177912] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:12.000 [2024-04-23 21:28:06.178071] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.000 [2024-04-23 21:28:06.178078] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.000 [2024-04-23 21:28:06.178082] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.000 [2024-04-23 21:28:06.178086] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:27:12.000 [2024-04-23 21:28:06.178100] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:27:12.000 [2024-04-23 21:28:06.178108] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:27:12.000 [2024-04-23 21:28:06.178118] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:27:12.000 [2024-04-23 21:28:06.178126] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:27:12.000 [2024-04-23 21:28:06.178132] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:27:12.000 [2024-04-23 21:28:06.178139] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:27:12.000 [2024-04-23 21:28:06.178145] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:27:12.000 [2024-04-23 21:28:06.178152] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:27:12.000 
[2024-04-23 21:28:06.178174] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.000 [2024-04-23 21:28:06.178180] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:27:12.000 [2024-04-23 21:28:06.178192] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.000 [2024-04-23 21:28:06.178202] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.000 [2024-04-23 21:28:06.178207] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.000 [2024-04-23 21:28:06.178212] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x614000002040) 00:27:12.000 [2024-04-23 21:28:06.178221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:27:12.000 [2024-04-23 21:28:06.178234] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:27:12.000 [2024-04-23 21:28:06.178240] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:27:12.000 [2024-04-23 21:28:06.178355] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.000 [2024-04-23 21:28:06.178362] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.000 [2024-04-23 21:28:06.178367] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.000 [2024-04-23 21:28:06.178372] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:27:12.000 [2024-04-23 21:28:06.178381] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.000 [2024-04-23 21:28:06.178389] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.000 [2024-04-23 21:28:06.178394] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.000 [2024-04-23 21:28:06.178398] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x614000002040 00:27:12.000 [2024-04-23 21:28:06.178408] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.000 [2024-04-23 21:28:06.178412] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x614000002040) 00:27:12.000 [2024-04-23 21:28:06.178420] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.000 [2024-04-23 21:28:06.178430] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:27:12.000 [2024-04-23 21:28:06.178534] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.000 [2024-04-23 21:28:06.178541] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.000 [2024-04-23 21:28:06.178544] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.000 [2024-04-23 21:28:06.178549] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x614000002040 00:27:12.000 [2024-04-23 21:28:06.178558] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.000 [2024-04-23 21:28:06.178563] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x614000002040) 00:27:12.000 [2024-04-23 21:28:06.178570] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.000 [2024-04-23 21:28:06.178579] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:27:12.000 [2024-04-23 21:28:06.178678] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.000 [2024-04-23 21:28:06.178685] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.000 [2024-04-23 21:28:06.178689] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.000 [2024-04-23 21:28:06.178693] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x614000002040 00:27:12.000 [2024-04-23 21:28:06.178702] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.000 [2024-04-23 21:28:06.178706] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x614000002040) 00:27:12.000 [2024-04-23 21:28:06.178714] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.000 [2024-04-23 21:28:06.178723] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:27:12.000 [2024-04-23 21:28:06.178958] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.000 [2024-04-23 21:28:06.178965] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.000 [2024-04-23 21:28:06.178968] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.000 [2024-04-23 21:28:06.178973] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x614000002040 00:27:12.000 [2024-04-23 21:28:06.178994] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.000 [2024-04-23 21:28:06.178999] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x614000002040) 00:27:12.000 [2024-04-23 21:28:06.179010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.000 [2024-04-23 21:28:06.179019] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.000 [2024-04-23 21:28:06.179024] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:27:12.000 [2024-04-23 21:28:06.179032] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.000 [2024-04-23 21:28:06.179041] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.000 [2024-04-23 21:28:06.179046] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x614000002040) 00:27:12.000 [2024-04-23 21:28:06.179056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.000 [2024-04-23 21:28:06.179066] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.000 [2024-04-23 21:28:06.179071] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x614000002040) 00:27:12.001 [2024-04-23 21:28:06.179081] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.001 [2024-04-23 21:28:06.179093] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:27:12.001 [2024-04-23 21:28:06.179100] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:27:12.001 [2024-04-23 21:28:06.179105] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b940, cid 6, qid 0 00:27:12.001 [2024-04-23 21:28:06.179114] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001baa0, cid 7, qid 0 00:27:12.001 [2024-04-23 21:28:06.179280] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:12.001 [2024-04-23 21:28:06.179288] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:12.001 [2024-04-23 21:28:06.179292] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:12.001 [2024-04-23 21:28:06.179297] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=8192, cccid=5 00:27:12.001 [2024-04-23 21:28:06.179303] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b7e0) on tqpair(0x614000002040): expected_datao=0, payload_size=8192 00:27:12.001 [2024-04-23 21:28:06.179309] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.001 [2024-04-23 21:28:06.179653] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:12.001 [2024-04-23 21:28:06.179658] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:12.001 [2024-04-23 21:28:06.179666] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:12.001 [2024-04-23 21:28:06.179672] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:12.001 [2024-04-23 21:28:06.179676] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:12.001 [2024-04-23 21:28:06.179680] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=512, cccid=4 00:27:12.001 [2024-04-23 21:28:06.179686] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x614000002040): expected_datao=0, payload_size=512 00:27:12.001 [2024-04-23 21:28:06.179693] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.001 [2024-04-23 21:28:06.179700] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:12.001 [2024-04-23 21:28:06.179704] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:12.001 [2024-04-23 21:28:06.179712] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:12.001 [2024-04-23 21:28:06.179719] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:12.001 [2024-04-23 21:28:06.179723] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:12.001 [2024-04-23 21:28:06.179727] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=512, cccid=6 00:27:12.001 [2024-04-23 21:28:06.179732] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b940) on tqpair(0x614000002040): expected_datao=0, payload_size=512 00:27:12.001 [2024-04-23 21:28:06.179737] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.001 [2024-04-23 21:28:06.179744] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:12.001 [2024-04-23 21:28:06.179747] 
nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:27:12.001 [2024-04-23 21:28:06.179754] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:27:12.001 [2024-04-23 21:28:06.179760] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:27:12.001 [2024-04-23 21:28:06.179764] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:27:12.001 [2024-04-23 21:28:06.179769] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=4096, cccid=7
00:27:12.001 [2024-04-23 21:28:06.179775] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001baa0) on tqpair(0x614000002040): expected_datao=0, payload_size=4096
00:27:12.001 [2024-04-23 21:28:06.179779] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:12.001 [2024-04-23 21:28:06.179786] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:27:12.001 [2024-04-23 21:28:06.179790] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:27:12.001 [2024-04-23 21:28:06.179860] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:12.001 [2024-04-23 21:28:06.179866] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:12.001 [2024-04-23 21:28:06.179871] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:12.001 [2024-04-23 21:28:06.179876] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x614000002040
00:27:12.001 [2024-04-23 21:28:06.179893] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:12.001 [2024-04-23 21:28:06.179899] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:12.001 [2024-04-23 21:28:06.179903] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:12.001 [2024-04-23 21:28:06.179907] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040
00:27:12.001 [2024-04-23 21:28:06.179919] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:12.001 [2024-04-23 21:28:06.179925] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:12.001 [2024-04-23 21:28:06.179929] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:12.001 [2024-04-23 21:28:06.179933] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b940) on tqpair=0x614000002040
00:27:12.001 [2024-04-23 21:28:06.179943] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:12.001 [2024-04-23 21:28:06.179949] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:12.001 [2024-04-23 21:28:06.179953] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:12.001 [2024-04-23 21:28:06.179958] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001baa0) on tqpair=0x614000002040
00:27:12.001 =====================================================
00:27:12.001 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:27:12.001 =====================================================
00:27:12.001 Controller Capabilities/Features
00:27:12.001 ================================
00:27:12.001 Vendor ID: 8086
00:27:12.001 Subsystem Vendor ID: 8086
00:27:12.001 Serial Number: SPDK00000000000001
00:27:12.001 Model Number: SPDK bdev Controller
00:27:12.001 Firmware Version: 24.05
00:27:12.001 Recommended Arb Burst: 6
00:27:12.001 IEEE OUI Identifier: e4 d2 5c
00:27:12.001 Multi-path I/O
00:27:12.001 May have multiple subsystem ports: Yes
00:27:12.001 May have multiple controllers: Yes
00:27:12.001 Associated with SR-IOV VF: No
00:27:12.001 Max Data Transfer Size: 131072
00:27:12.001 Max Number of Namespaces: 32
00:27:12.001 Max Number of I/O Queues: 127
00:27:12.001 NVMe Specification Version (VS): 1.3
00:27:12.001 NVMe Specification Version (Identify): 1.3
00:27:12.001 Maximum Queue Entries: 128
00:27:12.001 Contiguous Queues Required: Yes
00:27:12.001 Arbitration Mechanisms Supported
00:27:12.001 Weighted Round Robin: Not Supported
00:27:12.001 Vendor Specific: Not Supported
00:27:12.001 Reset Timeout: 15000 ms
00:27:12.001 Doorbell Stride: 4 bytes
00:27:12.001 NVM Subsystem Reset: Not Supported
00:27:12.001 Command Sets Supported
00:27:12.001 NVM Command Set: Supported
00:27:12.001 Boot Partition: Not Supported
00:27:12.001 Memory Page Size Minimum: 4096 bytes
00:27:12.001 Memory Page Size Maximum: 4096 bytes
00:27:12.001 Persistent Memory Region: Not Supported
00:27:12.001 Optional Asynchronous Events Supported
00:27:12.001 Namespace Attribute Notices: Supported
00:27:12.001 Firmware Activation Notices: Not Supported
00:27:12.001 ANA Change Notices: Not Supported
00:27:12.001 PLE Aggregate Log Change Notices: Not Supported
00:27:12.001 LBA Status Info Alert Notices: Not Supported
00:27:12.001 EGE Aggregate Log Change Notices: Not Supported
00:27:12.001 Normal NVM Subsystem Shutdown event: Not Supported
00:27:12.001 Zone Descriptor Change Notices: Not Supported
00:27:12.001 Discovery Log Change Notices: Not Supported
00:27:12.001 Controller Attributes
00:27:12.001 128-bit Host Identifier: Supported
00:27:12.001 Non-Operational Permissive Mode: Not Supported
00:27:12.001 NVM Sets: Not Supported
00:27:12.001 Read Recovery Levels: Not Supported
00:27:12.001 Endurance Groups: Not Supported
00:27:12.001 Predictable Latency Mode: Not Supported
00:27:12.001 Traffic Based Keep ALive: Not Supported
00:27:12.001 Namespace Granularity: Not Supported
00:27:12.001 SQ Associations: Not Supported
00:27:12.001 UUID List: Not Supported
00:27:12.001 Multi-Domain Subsystem: Not Supported
00:27:12.001 Fixed Capacity Management: Not Supported
00:27:12.001 Variable Capacity Management: Not Supported
00:27:12.001 Delete Endurance Group: Not Supported
00:27:12.001 Delete NVM Set: Not Supported
00:27:12.001 Extended LBA Formats Supported: Not Supported
00:27:12.001 Flexible Data Placement Supported: Not Supported
00:27:12.001
00:27:12.001 Controller Memory Buffer Support
00:27:12.001 ================================
00:27:12.001 Supported: No
00:27:12.001
00:27:12.001 Persistent Memory Region Support
00:27:12.001 ================================
00:27:12.001 Supported: No
00:27:12.001
00:27:12.001 Admin Command Set Attributes
00:27:12.001 ============================
00:27:12.001 Security Send/Receive: Not Supported
00:27:12.001 Format NVM: Not Supported
00:27:12.001 Firmware Activate/Download: Not Supported
00:27:12.001 Namespace Management: Not Supported
00:27:12.001 Device Self-Test: Not Supported
00:27:12.001 Directives: Not Supported
00:27:12.001 NVMe-MI: Not Supported
00:27:12.001 Virtualization Management: Not Supported
00:27:12.001 Doorbell Buffer Config: Not Supported
00:27:12.001 Get LBA Status Capability: Not Supported
00:27:12.001 Command & Feature Lockdown Capability: Not Supported
00:27:12.001 Abort Command Limit: 4
00:27:12.001 Async Event Request Limit: 4
00:27:12.001 Number of Firmware Slots: N/A
00:27:12.001 Firmware Slot 1 Read-Only: N/A
00:27:12.001 Firmware Activation Without Reset: N/A
00:27:12.001 Multiple Update Detection Support: N/A
00:27:12.001 Firmware Update Granularity: No Information Provided
00:27:12.001 Per-Namespace SMART Log: No
00:27:12.001 Asymmetric Namespace Access Log Page: Not Supported
00:27:12.001 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:27:12.001 Command Effects Log Page: Supported
00:27:12.002 Get Log Page Extended Data: Supported
00:27:12.002 Telemetry Log Pages: Not Supported
00:27:12.002 Persistent Event Log Pages: Not Supported
00:27:12.002 Supported Log Pages Log Page: May Support
00:27:12.002 Commands Supported & Effects Log Page: Not Supported
00:27:12.002 Feature Identifiers & Effects Log Page:May Support
00:27:12.002 NVMe-MI Commands & Effects Log Page: May Support
00:27:12.002 Data Area 4 for Telemetry Log: Not Supported
00:27:12.002 Error Log Page Entries Supported: 128
00:27:12.002 Keep Alive: Supported
00:27:12.002 Keep Alive Granularity: 10000 ms
00:27:12.002
00:27:12.002 NVM Command Set Attributes
00:27:12.002 ==========================
00:27:12.002 Submission Queue Entry Size
00:27:12.002 Max: 64
00:27:12.002 Min: 64
00:27:12.002 Completion Queue Entry Size
00:27:12.002 Max: 16
00:27:12.002 Min: 16
00:27:12.002 Number of Namespaces: 32
00:27:12.002 Compare Command: Supported
00:27:12.002 Write Uncorrectable Command: Not Supported
00:27:12.002 Dataset Management Command: Supported
00:27:12.002 Write Zeroes Command: Supported
00:27:12.002 Set Features Save Field: Not Supported
00:27:12.002 Reservations: Supported
00:27:12.002 Timestamp: Not Supported
00:27:12.002 Copy: Supported
00:27:12.002 Volatile Write Cache: Present
00:27:12.002 Atomic Write Unit (Normal): 1
00:27:12.002 Atomic Write Unit (PFail): 1
00:27:12.002 Atomic Compare & Write Unit: 1
00:27:12.002 Fused Compare & Write: Supported
00:27:12.002 Scatter-Gather List
00:27:12.002 SGL Command Set: Supported
00:27:12.002 SGL Keyed: Supported
00:27:12.002 SGL Bit Bucket Descriptor: Not Supported
00:27:12.002 SGL Metadata Pointer: Not Supported
00:27:12.002 Oversized SGL: Not Supported
00:27:12.002 SGL Metadata Address: Not Supported
00:27:12.002 SGL Offset: Supported
00:27:12.002 Transport SGL Data Block: Not Supported
00:27:12.002 Replay Protected Memory Block: Not Supported
00:27:12.002
00:27:12.002 Firmware Slot Information
00:27:12.002 =========================
00:27:12.002 Active slot: 1
00:27:12.002 Slot 1 Firmware Revision: 24.05
00:27:12.002
00:27:12.002
00:27:12.002 Commands Supported and Effects
00:27:12.002 ==============================
00:27:12.002 Admin Commands
00:27:12.002 --------------
00:27:12.002 Get Log Page (02h): Supported
00:27:12.002 Identify (06h): Supported
00:27:12.002 Abort (08h): Supported
00:27:12.002 Set Features (09h): Supported
00:27:12.002 Get Features (0Ah): Supported
00:27:12.002 Asynchronous Event Request (0Ch): Supported
00:27:12.002 Keep Alive (18h): Supported
00:27:12.002 I/O Commands
00:27:12.002 ------------
00:27:12.002 Flush (00h): Supported LBA-Change
00:27:12.002 Write (01h): Supported LBA-Change
00:27:12.002 Read (02h): Supported
00:27:12.002 Compare (05h): Supported
00:27:12.002 Write Zeroes (08h): Supported LBA-Change
00:27:12.002 Dataset Management (09h): Supported LBA-Change
00:27:12.002 Copy (19h): Supported LBA-Change
00:27:12.002 Unknown (79h): Supported LBA-Change
00:27:12.002 Unknown (7Ah): Supported
00:27:12.002
00:27:12.002 Error Log
00:27:12.002 =========
00:27:12.002
00:27:12.002 Arbitration
00:27:12.002 ===========
00:27:12.002 Arbitration Burst: 1
00:27:12.002
00:27:12.002 Power Management
00:27:12.002 ================
00:27:12.002 Number of Power States: 1
00:27:12.002 Current Power State: Power State #0
00:27:12.002 Power State #0:
00:27:12.002 Max Power: 0.00 W
00:27:12.002 Non-Operational State: Operational
00:27:12.002 Entry Latency: Not Reported
00:27:12.002 Exit Latency: Not Reported
00:27:12.002 Relative Read Throughput: 0
00:27:12.002 Relative Read Latency: 0
00:27:12.002 Relative Write Throughput: 0
00:27:12.002 Relative Write Latency: 0
00:27:12.002 Idle Power: Not Reported
00:27:12.002 Active Power: Not Reported
00:27:12.002 Non-Operational Permissive Mode: Not Supported
00:27:12.002
00:27:12.002 Health Information
00:27:12.002 ==================
00:27:12.002 Critical Warnings:
00:27:12.002 Available Spare Space: OK
00:27:12.002 Temperature: OK
00:27:12.002 Device Reliability: OK
00:27:12.002 Read Only: No
00:27:12.002 Volatile Memory Backup: OK
00:27:12.002 Current Temperature: 0 Kelvin (-273 Celsius)
00:27:12.002 Temperature Threshold: [2024-04-23 21:28:06.180089] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:12.002 [2024-04-23 21:28:06.180095] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x614000002040)
00:27:12.002 [2024-04-23 21:28:06.180109] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.002 [2024-04-23 21:28:06.180123] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001baa0, cid 7, qid 0
00:27:12.002 [2024-04-23 21:28:06.180229] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:12.002 [2024-04-23 21:28:06.180237] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:12.002 [2024-04-23 21:28:06.180241] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:12.002 [2024-04-23 21:28:06.180246] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001baa0) on tqpair=0x614000002040
00:27:12.002 [2024-04-23 21:28:06.180284] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD
00:27:12.002 [2024-04-23 21:28:06.180297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:12.002 [2024-04-23 21:28:06.180305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:12.002 [2024-04-23 21:28:06.180312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:12.002 [2024-04-23 21:28:06.180318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:12.002 [2024-04-23 21:28:06.180328] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:12.002 [2024-04-23 21:28:06.180335] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:12.002 [2024-04-23 21:28:06.180342] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040)
00:27:12.002 [2024-04-23 21:28:06.180354] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.002 [2024-04-23 21:28:06.180368] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0
00:27:12.002 [2024-04-23
21:28:06.183636] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.002 [2024-04-23 21:28:06.183647] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.002 [2024-04-23 21:28:06.183653] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.002 [2024-04-23 21:28:06.183658] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:27:12.002 [2024-04-23 21:28:06.183670] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.002 [2024-04-23 21:28:06.183675] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.002 [2024-04-23 21:28:06.183682] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:27:12.002 [2024-04-23 21:28:06.183692] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.002 [2024-04-23 21:28:06.183709] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:27:12.002 [2024-04-23 21:28:06.183888] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.002 [2024-04-23 21:28:06.183895] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.002 [2024-04-23 21:28:06.183899] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.003 [2024-04-23 21:28:06.183903] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:27:12.003 [2024-04-23 21:28:06.183909] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:27:12.003 [2024-04-23 21:28:06.183916] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:27:12.003 [2024-04-23 21:28:06.183927] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.003 [2024-04-23 21:28:06.183932] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.003 [2024-04-23 21:28:06.183938] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:27:12.003 [2024-04-23 21:28:06.183946] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.003 [2024-04-23 21:28:06.183960] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:27:12.003 [2024-04-23 21:28:06.184118] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.003 [2024-04-23 21:28:06.184124] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.003 [2024-04-23 21:28:06.184128] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.003 [2024-04-23 21:28:06.184133] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:27:12.003 [2024-04-23 21:28:06.184144] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.003 [2024-04-23 21:28:06.184149] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.003 [2024-04-23 21:28:06.184153] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:27:12.003 [2024-04-23 21:28:06.184161] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:12.003 [2024-04-23 21:28:06.184171] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:27:12.003 [2024-04-23 21:28:06.184275] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.003 [2024-04-23 21:28:06.184282] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.003 [2024-04-23 21:28:06.184285] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.003 [2024-04-23 21:28:06.184292] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:27:12.003 [2024-04-23 21:28:06.184302] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.003 [2024-04-23 21:28:06.184307] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.003 [2024-04-23 21:28:06.184311] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:27:12.003 [2024-04-23 21:28:06.184319] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.003 [2024-04-23 21:28:06.184330] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:27:12.003 [2024-04-23 21:28:06.184490] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.003 [2024-04-23 21:28:06.184496] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.003 [2024-04-23 21:28:06.184500] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.003 [2024-04-23 21:28:06.184504] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:27:12.003 [2024-04-23 21:28:06.184515] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.003 [2024-04-23 21:28:06.184519] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.003 [2024-04-23 21:28:06.184523] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:27:12.003 [2024-04-23 21:28:06.184532] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.003 [2024-04-23 21:28:06.184541] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:27:12.003 [2024-04-23 21:28:06.184649] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.003 [2024-04-23 21:28:06.184656] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.003 [2024-04-23 21:28:06.184660] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.003 [2024-04-23 21:28:06.184664] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:27:12.003 [2024-04-23 21:28:06.184674] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.003 [2024-04-23 21:28:06.184678] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.003 [2024-04-23 21:28:06.184683] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:27:12.003 [2024-04-23 21:28:06.184691] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.003 [2024-04-23 21:28:06.184700] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x62600001b520, cid 3, qid 0 00:27:12.003 [2024-04-23 21:28:06.184791] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.003 [2024-04-23 21:28:06.184797] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.003 [2024-04-23 21:28:06.184801] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.003 [2024-04-23 21:28:06.184805] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:27:12.003 [2024-04-23 21:28:06.184820] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.003 [2024-04-23 21:28:06.184824] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.003 [2024-04-23 21:28:06.184828] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:27:12.003 [2024-04-23 21:28:06.184838] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.003 [2024-04-23 21:28:06.184847] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:27:12.003 [2024-04-23 21:28:06.184949] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.003 [2024-04-23 21:28:06.184956] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.003 [2024-04-23 21:28:06.184959] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.003 [2024-04-23 21:28:06.184966] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:27:12.003 [2024-04-23 21:28:06.184976] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.003 [2024-04-23 21:28:06.184981] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.003 [2024-04-23 21:28:06.184985] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:27:12.003 [2024-04-23 21:28:06.184993] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.003 [2024-04-23 21:28:06.185002] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:27:12.003 [2024-04-23 21:28:06.185161] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.003 [2024-04-23 21:28:06.185167] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.003 [2024-04-23 21:28:06.185171] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.003 [2024-04-23 21:28:06.185176] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:27:12.003 [2024-04-23 21:28:06.185185] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.003 [2024-04-23 21:28:06.185190] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.003 [2024-04-23 21:28:06.185194] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:27:12.003 [2024-04-23 21:28:06.185202] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.003 [2024-04-23 21:28:06.185211] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:27:12.003 [2024-04-23 21:28:06.185313] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:27:12.003 [2024-04-23 21:28:06.185319] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.003 [2024-04-23 21:28:06.185323] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.003 [2024-04-23 21:28:06.185327] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:27:12.003 [2024-04-23 21:28:06.185337] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.003 [2024-04-23 21:28:06.185342] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.003 [2024-04-23 21:28:06.185346] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:27:12.003 [2024-04-23 21:28:06.185354] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.003 [2024-04-23 21:28:06.185364] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:27:12.003 [2024-04-23 21:28:06.185456] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.003 [2024-04-23 21:28:06.185463] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.003 [2024-04-23 21:28:06.185467] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.003 [2024-04-23 21:28:06.185472] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:27:12.003 [2024-04-23 21:28:06.185482] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.003 [2024-04-23 21:28:06.185487] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.003 [2024-04-23 21:28:06.185492] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:27:12.003 [2024-04-23 21:28:06.185500] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.003 [2024-04-23 21:28:06.185509] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:27:12.003 [2024-04-23 21:28:06.185601] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.003 [2024-04-23 21:28:06.185609] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.003 [2024-04-23 21:28:06.185613] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.003 [2024-04-23 21:28:06.185619] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:27:12.003 [2024-04-23 21:28:06.185633] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.003 [2024-04-23 21:28:06.185638] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.003 [2024-04-23 21:28:06.185642] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:27:12.003 [2024-04-23 21:28:06.185650] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.003 [2024-04-23 21:28:06.185660] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:27:12.003 [2024-04-23 21:28:06.185754] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.003 [2024-04-23 21:28:06.185762] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.003 
[2024-04-23 21:28:06.185766] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.003 [2024-04-23 21:28:06.185771] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:27:12.003 [2024-04-23 21:28:06.185781] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.003 [2024-04-23 21:28:06.185785] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.003 [2024-04-23 21:28:06.185790] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:27:12.004 [2024-04-23 21:28:06.185798] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.004 [2024-04-23 21:28:06.185807] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:27:12.004 [2024-04-23 21:28:06.185971] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.004 [2024-04-23 21:28:06.185977] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.004 [2024-04-23 21:28:06.185981] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.004 [2024-04-23 21:28:06.185985] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:27:12.004 [2024-04-23 21:28:06.185995] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.004 [2024-04-23 21:28:06.186000] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.004 [2024-04-23 21:28:06.186004] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:27:12.004 [2024-04-23 21:28:06.186012] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.004 [2024-04-23 21:28:06.186021] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:27:12.004 [2024-04-23 21:28:06.186114] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.004 [2024-04-23 21:28:06.186120] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.004 [2024-04-23 21:28:06.186124] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.004 [2024-04-23 21:28:06.186128] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:27:12.004 [2024-04-23 21:28:06.186138] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.004 [2024-04-23 21:28:06.186142] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.004 [2024-04-23 21:28:06.186146] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:27:12.004 [2024-04-23 21:28:06.186156] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.004 [2024-04-23 21:28:06.186165] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:27:12.004 [2024-04-23 21:28:06.186325] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.004 [2024-04-23 21:28:06.186331] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.004 [2024-04-23 21:28:06.186335] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.004 [2024-04-23 
21:28:06.186340] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:27:12.004 [2024-04-23 21:28:06.186350] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.004 [2024-04-23 21:28:06.186355] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.004 [2024-04-23 21:28:06.186359] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:27:12.004 [2024-04-23 21:28:06.186367] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.004 [2024-04-23 21:28:06.186376] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:27:12.004 [2024-04-23 21:28:06.186537] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.004 [2024-04-23 21:28:06.186543] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.004 [2024-04-23 21:28:06.186547] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.004 [2024-04-23 21:28:06.186551] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:27:12.004 [2024-04-23 21:28:06.186562] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.004 [2024-04-23 21:28:06.186566] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.004 [2024-04-23 21:28:06.186570] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:27:12.004 [2024-04-23 21:28:06.186578] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.004 [2024-04-23 21:28:06.186588] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:27:12.004 [2024-04-23 21:28:06.186688] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.004 [2024-04-23 21:28:06.186695] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.004 [2024-04-23 21:28:06.186698] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.004 [2024-04-23 21:28:06.186702] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:27:12.004 [2024-04-23 21:28:06.186712] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.004 [2024-04-23 21:28:06.186716] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.004 [2024-04-23 21:28:06.186721] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:27:12.004 [2024-04-23 21:28:06.186729] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.004 [2024-04-23 21:28:06.186738] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:27:12.004 [2024-04-23 21:28:06.186832] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.004 [2024-04-23 21:28:06.186838] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.004 [2024-04-23 21:28:06.186842] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.004 [2024-04-23 21:28:06.186846] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 
00:27:12.004 [2024-04-23 21:28:06.186856] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.004 [2024-04-23 21:28:06.186860] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.004 [2024-04-23 21:28:06.186864] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:27:12.004 [2024-04-23 21:28:06.186872] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.004 [2024-04-23 21:28:06.186881] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:27:12.004 [2024-04-23 21:28:06.186977] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.004 [2024-04-23 21:28:06.186983] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.004 [2024-04-23 21:28:06.186987] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.004 [2024-04-23 21:28:06.186993] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:27:12.004 [2024-04-23 21:28:06.187003] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.004 [2024-04-23 21:28:06.187007] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.004 [2024-04-23 21:28:06.187012] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:27:12.004 [2024-04-23 21:28:06.187019] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.004 [2024-04-23 21:28:06.187028] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:27:12.004 [2024-04-23 21:28:06.187136] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.004 [2024-04-23 21:28:06.187142] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.004 [2024-04-23 21:28:06.187146] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.004 [2024-04-23 21:28:06.187150] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:27:12.004 [2024-04-23 21:28:06.187161] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.004 [2024-04-23 21:28:06.187165] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.004 [2024-04-23 21:28:06.187174] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:27:12.004 [2024-04-23 21:28:06.187182] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.004 [2024-04-23 21:28:06.187191] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:27:12.004 [2024-04-23 21:28:06.187284] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.004 [2024-04-23 21:28:06.187290] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.004 [2024-04-23 21:28:06.187294] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.004 [2024-04-23 21:28:06.187298] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:27:12.004 [2024-04-23 21:28:06.187309] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.004 [2024-04-23 
21:28:06.187313] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.004 [2024-04-23 21:28:06.187317] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:27:12.004 [2024-04-23 21:28:06.187325] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.004 [2024-04-23 21:28:06.187335] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:27:12.004 [2024-04-23 21:28:06.187423] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.004 [2024-04-23 21:28:06.187430] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.004 [2024-04-23 21:28:06.187434] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.004 [2024-04-23 21:28:06.187438] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:27:12.004 [2024-04-23 21:28:06.187448] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.004 [2024-04-23 21:28:06.187453] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.004 [2024-04-23 21:28:06.187457] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:27:12.004 [2024-04-23 21:28:06.187467] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.004 [2024-04-23 21:28:06.187476] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:27:12.004 [2024-04-23 21:28:06.187569] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.004 [2024-04-23 21:28:06.187576] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.004 [2024-04-23 21:28:06.187579] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.004 [2024-04-23 21:28:06.187586] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:27:12.004 [2024-04-23 21:28:06.187596] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.004 [2024-04-23 21:28:06.187601] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.004 [2024-04-23 21:28:06.187605] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:27:12.004 [2024-04-23 21:28:06.187613] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.004 [2024-04-23 21:28:06.187623] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:27:12.004 [2024-04-23 21:28:06.187752] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.004 [2024-04-23 21:28:06.187758] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.004 [2024-04-23 21:28:06.187763] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.004 [2024-04-23 21:28:06.187767] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:27:12.004 [2024-04-23 21:28:06.187777] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.005 [2024-04-23 21:28:06.187782] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.005 [2024-04-23 21:28:06.187786] nvme_tcp.c: 
958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:27:12.005 [2024-04-23 21:28:06.187793] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.005 [2024-04-23 21:28:06.187803] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:27:12.005 [2024-04-23 21:28:06.187933] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.005 [2024-04-23 21:28:06.187940] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.005 [2024-04-23 21:28:06.187943] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.005 [2024-04-23 21:28:06.187948] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:27:12.005 [2024-04-23 21:28:06.187958] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.005 [2024-04-23 21:28:06.187962] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.005 [2024-04-23 21:28:06.187967] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:27:12.005 [2024-04-23 21:28:06.187975] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.005 [2024-04-23 21:28:06.187985] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:27:12.005 [2024-04-23 21:28:06.188084] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.005 [2024-04-23 21:28:06.188090] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.005 [2024-04-23 21:28:06.188095] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.005 [2024-04-23 21:28:06.188099] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:27:12.005 [2024-04-23 21:28:06.188109] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.005 [2024-04-23 21:28:06.188113] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.005 [2024-04-23 21:28:06.188118] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:27:12.005 [2024-04-23 21:28:06.188126] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.005 [2024-04-23 21:28:06.188135] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:27:12.005 [2024-04-23 21:28:06.188260] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.005 [2024-04-23 21:28:06.188267] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.005 [2024-04-23 21:28:06.188271] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.005 [2024-04-23 21:28:06.188276] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:27:12.005 [2024-04-23 21:28:06.188286] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.005 [2024-04-23 21:28:06.188290] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.005 [2024-04-23 21:28:06.188295] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:27:12.005 [2024-04-23 21:28:06.188302] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.005 [2024-04-23 21:28:06.188312] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:27:12.005 [2024-04-23 21:28:06.188440] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.005 [2024-04-23 21:28:06.188446] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.005 [2024-04-23 21:28:06.188450] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.005 [2024-04-23 21:28:06.188454] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:27:12.005 [2024-04-23 21:28:06.188464] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.005 [2024-04-23 21:28:06.188469] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.005 [2024-04-23 21:28:06.188473] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:27:12.005 [2024-04-23 21:28:06.188481] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.005 [2024-04-23 21:28:06.188491] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:27:12.005 [2024-04-23 21:28:06.188582] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.005 [2024-04-23 21:28:06.188589] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.005 [2024-04-23 21:28:06.188593] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.005 [2024-04-23 21:28:06.188597] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:27:12.005 [2024-04-23 21:28:06.188607] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:12.005 [2024-04-23 21:28:06.188611] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:12.005 [2024-04-23 21:28:06.188616] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:27:12.005 [2024-04-23 21:28:06.188623] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.005 [2024-04-23 21:28:06.192642] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:27:12.005 [2024-04-23 21:28:06.192773] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:12.005 [2024-04-23 21:28:06.192780] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:12.005 [2024-04-23 21:28:06.192784] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:12.005 [2024-04-23 21:28:06.192789] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:27:12.005 [2024-04-23 21:28:06.192797] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 8 milliseconds 00:27:12.005 0 Kelvin (-273 Celsius) 00:27:12.005 Available Spare: 0% 00:27:12.005 Available Spare Threshold: 0% 00:27:12.005 Life Percentage Used: 0% 00:27:12.005 Data Units Read: 0 00:27:12.005 Data Units Written: 0 00:27:12.005 Host Read Commands: 0 00:27:12.005 Host Write Commands: 0 00:27:12.005 Controller Busy Time: 0 minutes 00:27:12.005 Power Cycles: 0 
00:27:12.005 Power On Hours: 0 hours 00:27:12.005 Unsafe Shutdowns: 0 00:27:12.005 Unrecoverable Media Errors: 0 00:27:12.005 Lifetime Error Log Entries: 0 00:27:12.005 Warning Temperature Time: 0 minutes 00:27:12.005 Critical Temperature Time: 0 minutes 00:27:12.005 00:27:12.005 Number of Queues 00:27:12.005 ================ 00:27:12.005 Number of I/O Submission Queues: 127 00:27:12.005 Number of I/O Completion Queues: 127 00:27:12.005 00:27:12.005 Active Namespaces 00:27:12.005 ================= 00:27:12.005 Namespace ID:1 00:27:12.005 Error Recovery Timeout: Unlimited 00:27:12.005 Command Set Identifier: NVM (00h) 00:27:12.005 Deallocate: Supported 00:27:12.005 Deallocated/Unwritten Error: Not Supported 00:27:12.005 Deallocated Read Value: Unknown 00:27:12.005 Deallocate in Write Zeroes: Not Supported 00:27:12.005 Deallocated Guard Field: 0xFFFF 00:27:12.005 Flush: Supported 00:27:12.005 Reservation: Supported 00:27:12.005 Namespace Sharing Capabilities: Multiple Controllers 00:27:12.005 Size (in LBAs): 131072 (0GiB) 00:27:12.005 Capacity (in LBAs): 131072 (0GiB) 00:27:12.005 Utilization (in LBAs): 131072 (0GiB) 00:27:12.005 NGUID: ABCDEF0123456789ABCDEF0123456789 00:27:12.005 EUI64: ABCDEF0123456789 00:27:12.005 UUID: f9e3a7c7-ca91-408c-bf2c-f35fa96f3539 00:27:12.005 Thin Provisioning: Not Supported 00:27:12.005 Per-NS Atomic Units: Yes 00:27:12.005 Atomic Boundary Size (Normal): 0 00:27:12.005 Atomic Boundary Size (PFail): 0 00:27:12.005 Atomic Boundary Offset: 0 00:27:12.005 Maximum Single Source Range Length: 65535 00:27:12.005 Maximum Copy Length: 65535 00:27:12.005 Maximum Source Range Count: 1 00:27:12.005 NGUID/EUI64 Never Reused: No 00:27:12.005 Namespace Write Protected: No 00:27:12.005 Number of LBA Formats: 1 00:27:12.005 Current LBA Format: LBA Format #00 00:27:12.005 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:12.005 00:27:12.005 21:28:06 -- host/identify.sh@51 -- # sync 00:27:12.005 21:28:06 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:12.005 21:28:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:12.005 21:28:06 -- common/autotest_common.sh@10 -- # set +x 00:27:12.005 21:28:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:12.005 21:28:06 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:27:12.005 21:28:06 -- host/identify.sh@56 -- # nvmftestfini 00:27:12.005 21:28:06 -- nvmf/common.sh@477 -- # nvmfcleanup 00:27:12.005 21:28:06 -- nvmf/common.sh@117 -- # sync 00:27:12.005 21:28:06 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:12.005 21:28:06 -- nvmf/common.sh@120 -- # set +e 00:27:12.005 21:28:06 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:12.005 21:28:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:12.005 rmmod nvme_tcp 00:27:12.005 rmmod nvme_fabrics 00:27:12.270 rmmod nvme_keyring 00:27:12.270 21:28:06 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:12.270 21:28:06 -- nvmf/common.sh@124 -- # set -e 00:27:12.270 21:28:06 -- nvmf/common.sh@125 -- # return 0 00:27:12.270 21:28:06 -- nvmf/common.sh@478 -- # '[' -n 1575516 ']' 00:27:12.270 21:28:06 -- nvmf/common.sh@479 -- # killprocess 1575516 00:27:12.270 21:28:06 -- common/autotest_common.sh@936 -- # '[' -z 1575516 ']' 00:27:12.270 21:28:06 -- common/autotest_common.sh@940 -- # kill -0 1575516 00:27:12.270 21:28:06 -- common/autotest_common.sh@941 -- # uname 00:27:12.270 21:28:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:12.270 21:28:06 -- common/autotest_common.sh@942 -- # 
ps --no-headers -o comm= 1575516 00:27:12.270 21:28:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:12.270 21:28:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:12.270 21:28:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1575516' 00:27:12.270 killing process with pid 1575516 00:27:12.270 21:28:06 -- common/autotest_common.sh@955 -- # kill 1575516 00:27:12.270 [2024-04-23 21:28:06.335863] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:27:12.270 21:28:06 -- common/autotest_common.sh@960 -- # wait 1575516 00:27:12.838 21:28:06 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:27:12.838 21:28:06 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:27:12.838 21:28:06 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:27:12.838 21:28:06 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:12.838 21:28:06 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:12.838 21:28:06 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:12.838 21:28:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:12.838 21:28:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:14.742 21:28:08 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:14.742 00:27:14.742 real 0m9.706s 00:27:14.742 user 0m8.325s 00:27:14.742 sys 0m4.552s 00:27:14.742 21:28:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:14.742 21:28:08 -- common/autotest_common.sh@10 -- # set +x 00:27:14.742 ************************************ 00:27:14.742 END TEST nvmf_identify 00:27:14.742 ************************************ 00:27:14.742 21:28:08 -- nvmf/nvmf.sh@96 -- # run_test nvmf_perf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:27:14.742 21:28:08 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:27:14.742 21:28:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:14.742 21:28:08 -- common/autotest_common.sh@10 -- # set +x 00:27:15.002 ************************************ 00:27:15.002 START TEST nvmf_perf 00:27:15.002 ************************************ 00:27:15.002 21:28:09 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:27:15.002 * Looking for test storage... 
00:27:15.002 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:27:15.002 21:28:09 -- host/perf.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:27:15.002 21:28:09 -- nvmf/common.sh@7 -- # uname -s 00:27:15.002 21:28:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:15.002 21:28:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:15.002 21:28:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:15.002 21:28:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:15.002 21:28:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:15.002 21:28:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:15.002 21:28:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:15.002 21:28:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:15.002 21:28:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:15.002 21:28:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:15.002 21:28:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:27:15.002 21:28:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:27:15.002 21:28:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:15.002 21:28:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:15.002 21:28:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:27:15.002 21:28:09 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:15.003 21:28:09 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:27:15.003 21:28:09 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:15.003 21:28:09 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:15.003 21:28:09 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:15.003 21:28:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:15.003 21:28:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:15.003 21:28:09 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:15.003 21:28:09 -- paths/export.sh@5 -- # export PATH 00:27:15.003 21:28:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:15.003 21:28:09 -- nvmf/common.sh@47 -- # : 0 00:27:15.003 21:28:09 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:15.003 21:28:09 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:15.003 21:28:09 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:15.003 21:28:09 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:15.003 21:28:09 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:15.003 21:28:09 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:15.003 21:28:09 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:15.003 21:28:09 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:15.003 21:28:09 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:27:15.003 21:28:09 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:27:15.003 21:28:09 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:27:15.003 21:28:09 -- host/perf.sh@17 -- # nvmftestinit 00:27:15.003 21:28:09 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:27:15.003 21:28:09 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:15.003 21:28:09 -- nvmf/common.sh@437 -- # prepare_net_devs 00:27:15.003 21:28:09 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:27:15.003 21:28:09 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:27:15.003 21:28:09 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:15.003 21:28:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:15.003 21:28:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:15.003 21:28:09 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:27:15.003 21:28:09 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:27:15.003 21:28:09 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:15.003 21:28:09 -- common/autotest_common.sh@10 -- # set +x 00:27:20.280 21:28:14 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:20.280 21:28:14 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:20.280 21:28:14 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:20.280 21:28:14 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:20.280 21:28:14 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:20.280 21:28:14 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:20.280 21:28:14 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:20.280 21:28:14 -- nvmf/common.sh@295 -- # net_devs=() 
00:27:20.280 21:28:14 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:20.280 21:28:14 -- nvmf/common.sh@296 -- # e810=() 00:27:20.280 21:28:14 -- nvmf/common.sh@296 -- # local -ga e810 00:27:20.280 21:28:14 -- nvmf/common.sh@297 -- # x722=() 00:27:20.280 21:28:14 -- nvmf/common.sh@297 -- # local -ga x722 00:27:20.280 21:28:14 -- nvmf/common.sh@298 -- # mlx=() 00:27:20.280 21:28:14 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:20.280 21:28:14 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:20.280 21:28:14 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:20.280 21:28:14 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:20.280 21:28:14 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:20.280 21:28:14 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:20.280 21:28:14 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:20.280 21:28:14 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:20.280 21:28:14 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:20.280 21:28:14 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:20.280 21:28:14 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:20.280 21:28:14 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:20.280 21:28:14 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:20.280 21:28:14 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:20.280 21:28:14 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:27:20.280 21:28:14 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:27:20.280 21:28:14 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:27:20.280 21:28:14 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:20.280 21:28:14 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:20.280 21:28:14 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:27:20.280 Found 0000:27:00.0 (0x8086 - 0x159b) 00:27:20.280 21:28:14 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:20.280 21:28:14 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:20.280 21:28:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:20.280 21:28:14 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:20.280 21:28:14 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:20.280 21:28:14 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:20.280 21:28:14 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:27:20.280 Found 0000:27:00.1 (0x8086 - 0x159b) 00:27:20.280 21:28:14 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:20.280 21:28:14 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:20.280 21:28:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:20.280 21:28:14 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:20.280 21:28:14 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:20.280 21:28:14 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:20.280 21:28:14 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:27:20.280 21:28:14 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:20.280 21:28:14 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:20.280 21:28:14 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:27:20.280 21:28:14 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:20.280 21:28:14 -- nvmf/common.sh@389 -- # echo 'Found net devices under 
0000:27:00.0: cvl_0_0' 00:27:20.280 Found net devices under 0000:27:00.0: cvl_0_0 00:27:20.280 21:28:14 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:27:20.280 21:28:14 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:20.280 21:28:14 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:20.280 21:28:14 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:27:20.280 21:28:14 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:20.280 21:28:14 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:27:20.280 Found net devices under 0000:27:00.1: cvl_0_1 00:27:20.280 21:28:14 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:27:20.280 21:28:14 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:27:20.280 21:28:14 -- nvmf/common.sh@403 -- # is_hw=yes 00:27:20.280 21:28:14 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:27:20.280 21:28:14 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:27:20.280 21:28:14 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:27:20.280 21:28:14 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:20.280 21:28:14 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:20.280 21:28:14 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:20.280 21:28:14 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:20.280 21:28:14 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:20.280 21:28:14 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:20.280 21:28:14 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:20.280 21:28:14 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:20.280 21:28:14 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:20.280 21:28:14 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:20.280 21:28:14 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:20.280 21:28:14 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:20.280 21:28:14 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:20.539 21:28:14 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:20.539 21:28:14 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:20.539 21:28:14 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:20.539 21:28:14 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:20.539 21:28:14 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:20.539 21:28:14 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:20.539 21:28:14 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:20.539 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:20.539 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.479 ms 00:27:20.539 00:27:20.539 --- 10.0.0.2 ping statistics --- 00:27:20.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:20.539 rtt min/avg/max/mdev = 0.479/0.479/0.479/0.000 ms 00:27:20.539 21:28:14 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:20.539 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:20.539 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.514 ms 00:27:20.539 00:27:20.539 --- 10.0.0.1 ping statistics --- 00:27:20.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:20.539 rtt min/avg/max/mdev = 0.514/0.514/0.514/0.000 ms 00:27:20.539 21:28:14 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:20.539 21:28:14 -- nvmf/common.sh@411 -- # return 0 00:27:20.539 21:28:14 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:27:20.539 21:28:14 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:20.539 21:28:14 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:27:20.539 21:28:14 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:27:20.539 21:28:14 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:20.539 21:28:14 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:27:20.539 21:28:14 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:27:20.797 21:28:14 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:27:20.797 21:28:14 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:27:20.797 21:28:14 -- common/autotest_common.sh@710 -- # xtrace_disable 00:27:20.797 21:28:14 -- common/autotest_common.sh@10 -- # set +x 00:27:20.797 21:28:14 -- nvmf/common.sh@470 -- # nvmfpid=1580000 00:27:20.797 21:28:14 -- nvmf/common.sh@471 -- # waitforlisten 1580000 00:27:20.797 21:28:14 -- common/autotest_common.sh@817 -- # '[' -z 1580000 ']' 00:27:20.797 21:28:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:20.797 21:28:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:20.797 21:28:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:20.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:20.797 21:28:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:20.797 21:28:14 -- common/autotest_common.sh@10 -- # set +x 00:27:20.797 21:28:14 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:20.797 [2024-04-23 21:28:14.904034] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 00:27:20.797 [2024-04-23 21:28:14.904134] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:20.797 EAL: No free 2048 kB hugepages reported on node 1 00:27:20.798 [2024-04-23 21:28:15.022568] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:21.057 [2024-04-23 21:28:15.121543] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:21.057 [2024-04-23 21:28:15.121578] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:21.057 [2024-04-23 21:28:15.121589] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:21.057 [2024-04-23 21:28:15.121598] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:21.057 [2024-04-23 21:28:15.121605] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
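The nvmf_tcp_init / nvmfappstart sequence traced above boils down to the sketch below: one of the two NIC ports found at 0000:27:00.0/00.1 is moved into a private network namespace and becomes the target side, the other stays in the default namespace as the initiator, and nvmf_tgt is then launched inside the namespace. This is a condensed reconstruction from the trace, not the verbatim nvmf/common.sh source; the cvl_0_* interface names, the 10.0.0.0/24 addresses, and the workspace path are all specific to this run.

    #!/usr/bin/env bash
    # Sketch of the test-network bring-up shown in the trace above.
    SPDK=/var/jenkins/workspace/dsa-phy-autotest/spdk   # this run's workspace
    NS=cvl_0_0_ns_spdk
    TARGET_IF=cvl_0_0        # handed to the target inside the namespace
    INITIATOR_IF=cvl_0_1     # stays in the default namespace

    ip -4 addr flush "$TARGET_IF"
    ip -4 addr flush "$INITIATOR_IF"

    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"

    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up

    # Admit NVMe/TCP traffic (port 4420) arriving on the initiator-side port.
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

    # Reachability check in both directions, exactly as the log does.
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1

    # Start the target inside the namespace; -m 0xF pins four reactors
    # (cores 0-3) and -e 0xFFFF enables all tracepoint groups. The harness
    # then polls until the app listens on /var/tmp/spdk.sock (waitforlisten).
    ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &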
00:27:21.057 [2024-04-23 21:28:15.121756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:21.057 [2024-04-23 21:28:15.121852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:21.057 [2024-04-23 21:28:15.121954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:21.057 [2024-04-23 21:28:15.121965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:21.627 21:28:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:21.627 21:28:15 -- common/autotest_common.sh@850 -- # return 0 00:27:21.627 21:28:15 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:27:21.627 21:28:15 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:21.627 21:28:15 -- common/autotest_common.sh@10 -- # set +x 00:27:21.627 21:28:15 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:21.627 21:28:15 -- host/perf.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:27:21.627 21:28:15 -- host/perf.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/gen_nvme.sh 00:27:22.564 21:28:16 -- host/perf.sh@30 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:27:22.564 21:28:16 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:27:22.564 21:28:16 -- host/perf.sh@30 -- # local_nvme_trid=0000:03:00.0 00:27:22.564 21:28:16 -- host/perf.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:27:22.564 21:28:16 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:27:22.564 21:28:16 -- host/perf.sh@33 -- # '[' -n 0000:03:00.0 ']' 00:27:22.564 21:28:16 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:27:22.564 21:28:16 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:27:22.564 21:28:16 -- host/perf.sh@42 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:27:22.822 [2024-04-23 21:28:16.936390] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:22.822 21:28:16 -- host/perf.sh@44 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:23.081 21:28:17 -- host/perf.sh@45 -- # for bdev in $bdevs 00:27:23.081 21:28:17 -- host/perf.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:23.081 21:28:17 -- host/perf.sh@45 -- # for bdev in $bdevs 00:27:23.081 21:28:17 -- host/perf.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:27:23.340 21:28:17 -- host/perf.sh@48 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:23.340 [2024-04-23 21:28:17.493503] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:23.340 21:28:17 -- host/perf.sh@49 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:23.599 21:28:17 -- host/perf.sh@52 -- # '[' -n 0000:03:00.0 ']' 00:27:23.599 21:28:17 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:03:00.0' 00:27:23.599 21:28:17 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:27:23.599 21:28:17 -- host/perf.sh@24 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:03:00.0' 00:27:24.978 Initializing NVMe Controllers 00:27:24.978 Attached to NVMe Controller at 0000:03:00.0 [1344:51c3] 00:27:24.978 Associating PCIE (0000:03:00.0) NSID 1 with lcore 0 00:27:24.978 Initialization complete. Launching workers. 00:27:24.978 ======================================================== 00:27:24.978 Latency(us) 00:27:24.978 Device Information : IOPS MiB/s Average min max 00:27:24.978 PCIE (0000:03:00.0) NSID 1 from core 0: 90748.38 354.49 352.16 72.51 4810.12 00:27:24.978 ======================================================== 00:27:24.978 Total : 90748.38 354.49 352.16 72.51 4810.12 00:27:24.978 00:27:24.978 21:28:19 -- host/perf.sh@56 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:24.978 EAL: No free 2048 kB hugepages reported on node 1 00:27:26.359 Initializing NVMe Controllers 00:27:26.359 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:26.359 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:26.359 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:26.359 Initialization complete. Launching workers. 00:27:26.359 ======================================================== 00:27:26.359 Latency(us) 00:27:26.359 Device Information : IOPS MiB/s Average min max 00:27:26.359 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 98.90 0.39 10110.55 180.81 49310.58 00:27:26.359 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 51.95 0.20 19249.15 7976.86 56003.05 00:27:26.359 ======================================================== 00:27:26.359 Total : 150.85 0.59 13257.62 180.81 56003.05 00:27:26.359 00:27:26.359 21:28:20 -- host/perf.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:26.359 EAL: No free 2048 kB hugepages reported on node 1 00:27:27.762 Initializing NVMe Controllers 00:27:27.762 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:27.762 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:27.762 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:27.762 Initialization complete. Launching workers. 
00:27:27.762 ======================================================== 00:27:27.762 Latency(us) 00:27:27.762 Device Information : IOPS MiB/s Average min max 00:27:27.762 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10667.48 41.67 3000.01 369.13 8915.51 00:27:27.762 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3853.09 15.05 8353.31 5237.72 17726.33 00:27:27.762 ======================================================== 00:27:27.762 Total : 14520.57 56.72 4420.53 369.13 17726.33 00:27:27.762 00:27:27.762 21:28:21 -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:27:27.762 21:28:21 -- host/perf.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:27.762 EAL: No free 2048 kB hugepages reported on node 1 00:27:30.305 Initializing NVMe Controllers 00:27:30.306 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:30.306 Controller IO queue size 128, less than required. 00:27:30.306 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:30.306 Controller IO queue size 128, less than required. 00:27:30.306 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:30.306 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:30.306 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:30.306 Initialization complete. Launching workers. 00:27:30.306 ======================================================== 00:27:30.306 Latency(us) 00:27:30.306 Device Information : IOPS MiB/s Average min max 00:27:30.306 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1161.99 290.50 113552.54 55745.01 208151.01 00:27:30.306 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 591.49 147.87 223207.46 87058.64 288671.03 00:27:30.306 ======================================================== 00:27:30.306 Total : 1753.48 438.37 150541.92 55745.01 288671.03 00:27:30.306 00:27:30.306 21:28:24 -- host/perf.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:27:30.306 EAL: No free 2048 kB hugepages reported on node 1 00:27:30.564 No valid NVMe controllers or AIO or URING devices found 00:27:30.564 Initializing NVMe Controllers 00:27:30.564 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:30.564 Controller IO queue size 128, less than required. 00:27:30.564 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:30.564 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:27:30.564 Controller IO queue size 128, less than required. 00:27:30.564 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:30.564 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:27:30.564 WARNING: Some requested NVMe devices were skipped 00:27:30.564 21:28:24 -- host/perf.sh@65 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:27:30.564 EAL: No free 2048 kB hugepages reported on node 1 00:27:33.855 Initializing NVMe Controllers 00:27:33.855 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:33.855 Controller IO queue size 128, less than required. 00:27:33.855 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:33.855 Controller IO queue size 128, less than required. 00:27:33.855 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:33.855 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:33.855 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:33.855 Initialization complete. Launching workers. 00:27:33.855 00:27:33.855 ==================== 00:27:33.855 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:27:33.855 TCP transport: 00:27:33.855 polls: 31525 00:27:33.855 idle_polls: 16930 00:27:33.855 sock_completions: 14595 00:27:33.855 nvme_completions: 5317 00:27:33.855 submitted_requests: 7964 00:27:33.855 queued_requests: 1 00:27:33.855 00:27:33.855 ==================== 00:27:33.855 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:27:33.855 TCP transport: 00:27:33.855 polls: 35968 00:27:33.855 idle_polls: 14372 00:27:33.855 sock_completions: 21596 00:27:33.855 nvme_completions: 4205 00:27:33.855 submitted_requests: 6294 00:27:33.855 queued_requests: 1 00:27:33.855 ======================================================== 00:27:33.855 Latency(us) 00:27:33.855 Device Information : IOPS MiB/s Average min max 00:27:33.855 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1328.57 332.14 99698.57 49792.12 187773.21 00:27:33.855 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1050.66 262.67 123737.23 56246.85 191795.54 00:27:33.855 ======================================================== 00:27:33.855 Total : 2379.24 594.81 110313.96 49792.12 191795.54 00:27:33.855 00:27:33.855 21:28:27 -- host/perf.sh@66 -- # sync 00:27:33.855 21:28:27 -- host/perf.sh@67 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:33.855 21:28:27 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:27:33.855 21:28:27 -- host/perf.sh@71 -- # '[' -n 0000:03:00.0 ']' 00:27:33.855 21:28:27 -- host/perf.sh@72 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:27:34.115 21:28:28 -- host/perf.sh@72 -- # ls_guid=420028fd-4b2e-4b09-b76f-9fbe5918582d 00:27:34.115 21:28:28 -- host/perf.sh@73 -- # get_lvs_free_mb 420028fd-4b2e-4b09-b76f-9fbe5918582d 00:27:34.115 21:28:28 -- common/autotest_common.sh@1350 -- # local lvs_uuid=420028fd-4b2e-4b09-b76f-9fbe5918582d 00:27:34.115 21:28:28 -- common/autotest_common.sh@1351 -- # local lvs_info 00:27:34.115 21:28:28 -- common/autotest_common.sh@1352 -- # local fc 00:27:34.115 21:28:28 -- common/autotest_common.sh@1353 -- # local cs 00:27:34.115 21:28:28 -- common/autotest_common.sh@1354 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:34.374 21:28:28 -- common/autotest_common.sh@1354 -- # lvs_info='[ 00:27:34.374 { 00:27:34.374 "uuid": "420028fd-4b2e-4b09-b76f-9fbe5918582d", 00:27:34.374 "name": "lvs_0", 00:27:34.374 "base_bdev": "Nvme0n1", 00:27:34.374 "total_data_clusters": 228704, 00:27:34.374 "free_clusters": 228704, 00:27:34.374 "block_size": 512, 00:27:34.374 "cluster_size": 4194304 00:27:34.374 } 00:27:34.374 ]' 00:27:34.375 21:28:28 -- common/autotest_common.sh@1355 -- # jq '.[] | select(.uuid=="420028fd-4b2e-4b09-b76f-9fbe5918582d") .free_clusters' 00:27:34.375 21:28:28 -- common/autotest_common.sh@1355 -- # fc=228704 00:27:34.375 21:28:28 -- common/autotest_common.sh@1356 -- # jq '.[] | select(.uuid=="420028fd-4b2e-4b09-b76f-9fbe5918582d") .cluster_size' 00:27:34.375 21:28:28 -- common/autotest_common.sh@1356 -- # cs=4194304 00:27:34.375 21:28:28 -- common/autotest_common.sh@1359 -- # free_mb=914816 00:27:34.375 21:28:28 -- common/autotest_common.sh@1360 -- # echo 914816 00:27:34.375 914816 00:27:34.375 21:28:28 -- host/perf.sh@77 -- # '[' 914816 -gt 20480 ']' 00:27:34.375 21:28:28 -- host/perf.sh@78 -- # free_mb=20480 00:27:34.375 21:28:28 -- host/perf.sh@80 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 420028fd-4b2e-4b09-b76f-9fbe5918582d lbd_0 20480 00:27:34.633 21:28:28 -- host/perf.sh@80 -- # lb_guid=8e0e063d-e2b2-4eed-a819-e4e64afea5c8 00:27:34.633 21:28:28 -- host/perf.sh@83 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 8e0e063d-e2b2-4eed-a819-e4e64afea5c8 lvs_n_0 00:27:35.199 21:28:29 -- host/perf.sh@83 -- # ls_nested_guid=d44392fa-7f0d-4cec-be8d-898ef30a7a2d 00:27:35.199 21:28:29 -- host/perf.sh@84 -- # get_lvs_free_mb d44392fa-7f0d-4cec-be8d-898ef30a7a2d 00:27:35.199 21:28:29 -- common/autotest_common.sh@1350 -- # local lvs_uuid=d44392fa-7f0d-4cec-be8d-898ef30a7a2d 00:27:35.199 21:28:29 -- common/autotest_common.sh@1351 -- # local lvs_info 00:27:35.199 21:28:29 -- common/autotest_common.sh@1352 -- # local fc 00:27:35.199 21:28:29 -- common/autotest_common.sh@1353 -- # local cs 00:27:35.199 21:28:29 -- common/autotest_common.sh@1354 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:35.199 21:28:29 -- common/autotest_common.sh@1354 -- # lvs_info='[ 00:27:35.199 { 00:27:35.199 "uuid": "420028fd-4b2e-4b09-b76f-9fbe5918582d", 00:27:35.199 "name": "lvs_0", 00:27:35.199 "base_bdev": "Nvme0n1", 00:27:35.199 "total_data_clusters": 228704, 00:27:35.199 "free_clusters": 223584, 00:27:35.199 "block_size": 512, 00:27:35.199 "cluster_size": 4194304 00:27:35.199 }, 00:27:35.199 { 00:27:35.199 "uuid": "d44392fa-7f0d-4cec-be8d-898ef30a7a2d", 00:27:35.199 "name": "lvs_n_0", 00:27:35.199 "base_bdev": "8e0e063d-e2b2-4eed-a819-e4e64afea5c8", 00:27:35.199 "total_data_clusters": 5114, 00:27:35.199 "free_clusters": 5114, 00:27:35.199 "block_size": 512, 00:27:35.199 "cluster_size": 4194304 00:27:35.199 } 00:27:35.199 ]' 00:27:35.199 21:28:29 -- common/autotest_common.sh@1355 -- # jq '.[] | select(.uuid=="d44392fa-7f0d-4cec-be8d-898ef30a7a2d") .free_clusters' 00:27:35.199 21:28:29 -- common/autotest_common.sh@1355 -- # fc=5114 00:27:35.199 21:28:29 -- common/autotest_common.sh@1356 -- # jq '.[] | select(.uuid=="d44392fa-7f0d-4cec-be8d-898ef30a7a2d") .cluster_size' 00:27:35.457 21:28:29 -- common/autotest_common.sh@1356 -- # cs=4194304 00:27:35.457 21:28:29 -- common/autotest_common.sh@1359 -- # free_mb=20456 
00:27:35.457 21:28:29 -- common/autotest_common.sh@1360 -- # echo 20456 00:27:35.457 20456 00:27:35.457 21:28:29 -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:27:35.457 21:28:29 -- host/perf.sh@88 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d44392fa-7f0d-4cec-be8d-898ef30a7a2d lbd_nest_0 20456 00:27:35.457 21:28:29 -- host/perf.sh@88 -- # lb_nested_guid=61642643-3002-468e-8d1b-44dec801e51d 00:27:35.457 21:28:29 -- host/perf.sh@89 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:35.718 21:28:29 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:27:35.718 21:28:29 -- host/perf.sh@91 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 61642643-3002-468e-8d1b-44dec801e51d 00:27:35.718 21:28:29 -- host/perf.sh@93 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:35.980 21:28:30 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:27:35.980 21:28:30 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:27:35.980 21:28:30 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:27:35.980 21:28:30 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:35.980 21:28:30 -- host/perf.sh@99 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:35.980 EAL: No free 2048 kB hugepages reported on node 1 00:27:48.201 Initializing NVMe Controllers 00:27:48.201 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:48.201 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:48.202 Initialization complete. Launching workers. 00:27:48.202 ======================================================== 00:27:48.202 Latency(us) 00:27:48.202 Device Information : IOPS MiB/s Average min max 00:27:48.202 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 46.30 0.02 21649.65 202.55 48025.29 00:27:48.202 ======================================================== 00:27:48.202 Total : 46.30 0.02 21649.65 202.55 48025.29 00:27:48.202 00:27:48.202 21:28:40 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:48.202 21:28:40 -- host/perf.sh@99 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:48.202 EAL: No free 2048 kB hugepages reported on node 1 00:27:58.203 Initializing NVMe Controllers 00:27:58.203 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:58.203 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:58.203 Initialization complete. Launching workers. 
00:27:58.203 ======================================================== 00:27:58.203 Latency(us) 00:27:58.203 Device Information : IOPS MiB/s Average min max 00:27:58.203 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 75.77 9.47 13218.16 6949.95 51947.79 00:27:58.203 ======================================================== 00:27:58.203 Total : 75.77 9.47 13218.16 6949.95 51947.79 00:27:58.203 00:27:58.203 21:28:50 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:27:58.203 21:28:50 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:58.203 21:28:50 -- host/perf.sh@99 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:58.203 EAL: No free 2048 kB hugepages reported on node 1 00:28:08.188 Initializing NVMe Controllers 00:28:08.188 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:08.188 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:08.188 Initialization complete. Launching workers. 00:28:08.188 ======================================================== 00:28:08.188 Latency(us) 00:28:08.188 Device Information : IOPS MiB/s Average min max 00:28:08.188 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8677.54 4.24 3688.09 196.32 7962.63 00:28:08.188 ======================================================== 00:28:08.188 Total : 8677.54 4.24 3688.09 196.32 7962.63 00:28:08.188 00:28:08.188 21:29:01 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:08.188 21:29:01 -- host/perf.sh@99 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:08.188 EAL: No free 2048 kB hugepages reported on node 1 00:28:18.176 Initializing NVMe Controllers 00:28:18.176 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:18.176 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:18.176 Initialization complete. Launching workers. 00:28:18.176 ======================================================== 00:28:18.176 Latency(us) 00:28:18.176 Device Information : IOPS MiB/s Average min max 00:28:18.176 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2638.09 329.76 12137.90 886.95 27126.43 00:28:18.176 ======================================================== 00:28:18.176 Total : 2638.09 329.76 12137.90 886.95 27126.43 00:28:18.176 00:28:18.176 21:29:11 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:18.176 21:29:11 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:18.176 21:29:11 -- host/perf.sh@99 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:18.176 EAL: No free 2048 kB hugepages reported on node 1 00:28:28.168 Initializing NVMe Controllers 00:28:28.168 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:28.168 Controller IO queue size 128, less than required. 00:28:28.168 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:28.168 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:28.168 Initialization complete. Launching workers. 
00:28:28.168 ======================================================== 00:28:28.169 Latency(us) 00:28:28.169 Device Information : IOPS MiB/s Average min max 00:28:28.169 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15955.90 7.79 8025.66 1354.32 26401.86 00:28:28.169 ======================================================== 00:28:28.169 Total : 15955.90 7.79 8025.66 1354.32 26401.86 00:28:28.169 00:28:28.169 21:29:21 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:28.169 21:29:21 -- host/perf.sh@99 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:28.169 EAL: No free 2048 kB hugepages reported on node 1 00:28:38.149 Initializing NVMe Controllers 00:28:38.149 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:38.149 Controller IO queue size 128, less than required. 00:28:38.149 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:38.149 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:38.149 Initialization complete. Launching workers. 00:28:38.149 ======================================================== 00:28:38.149 Latency(us) 00:28:38.149 Device Information : IOPS MiB/s Average min max 00:28:38.149 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1208.00 151.00 106568.83 29084.93 175385.89 00:28:38.149 ======================================================== 00:28:38.149 Total : 1208.00 151.00 106568.83 29084.93 175385.89 00:28:38.149 00:28:38.149 21:29:32 -- host/perf.sh@104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:38.408 21:29:32 -- host/perf.sh@105 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 61642643-3002-468e-8d1b-44dec801e51d 00:28:38.975 21:29:33 -- host/perf.sh@106 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:28:39.235 21:29:33 -- host/perf.sh@107 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8e0e063d-e2b2-4eed-a819-e4e64afea5c8 00:28:39.235 21:29:33 -- host/perf.sh@108 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:28:39.495 21:29:33 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:28:39.495 21:29:33 -- host/perf.sh@114 -- # nvmftestfini 00:28:39.496 21:29:33 -- nvmf/common.sh@477 -- # nvmfcleanup 00:28:39.496 21:29:33 -- nvmf/common.sh@117 -- # sync 00:28:39.496 21:29:33 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:39.496 21:29:33 -- nvmf/common.sh@120 -- # set +e 00:28:39.496 21:29:33 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:39.496 21:29:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:39.496 rmmod nvme_tcp 00:28:39.496 rmmod nvme_fabrics 00:28:39.496 rmmod nvme_keyring 00:28:39.496 21:29:33 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:39.496 21:29:33 -- nvmf/common.sh@124 -- # set -e 00:28:39.496 21:29:33 -- nvmf/common.sh@125 -- # return 0 00:28:39.496 21:29:33 -- nvmf/common.sh@478 -- # '[' -n 1580000 ']' 00:28:39.496 21:29:33 -- nvmf/common.sh@479 -- # killprocess 1580000 00:28:39.496 21:29:33 -- common/autotest_common.sh@936 -- # '[' -z 1580000 ']' 00:28:39.496 21:29:33 -- common/autotest_common.sh@940 -- # kill -0 1580000 00:28:39.496 
21:29:33 -- common/autotest_common.sh@941 -- # uname 00:28:39.496 21:29:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:39.496 21:29:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1580000 00:28:39.496 21:29:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:28:39.496 21:29:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:28:39.496 21:29:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1580000' 00:28:39.496 killing process with pid 1580000 00:28:39.496 21:29:33 -- common/autotest_common.sh@955 -- # kill 1580000 00:28:39.496 21:29:33 -- common/autotest_common.sh@960 -- # wait 1580000 00:28:41.005 21:29:35 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:28:41.005 21:29:35 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:28:41.005 21:29:35 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:28:41.005 21:29:35 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:41.005 21:29:35 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:41.005 21:29:35 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:41.005 21:29:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:41.005 21:29:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:42.915 21:29:37 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:42.915 00:28:42.915 real 1m28.109s 00:28:42.915 user 5m8.859s 00:28:42.915 sys 0m13.045s 00:28:42.915 21:29:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:42.915 21:29:37 -- common/autotest_common.sh@10 -- # set +x 00:28:42.915 ************************************ 00:28:42.915 END TEST nvmf_perf 00:28:42.915 ************************************ 00:28:42.915 21:29:37 -- nvmf/nvmf.sh@97 -- # run_test nvmf_fio_host /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:28:42.915 21:29:37 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:28:42.915 21:29:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:42.915 21:29:37 -- common/autotest_common.sh@10 -- # set +x 00:28:43.174 ************************************ 00:28:43.174 START TEST nvmf_fio_host 00:28:43.175 ************************************ 00:28:43.175 21:29:37 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:28:43.175 * Looking for test storage... 
00:28:43.175 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:28:43.175 21:29:37 -- host/fio.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:28:43.175 21:29:37 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:43.175 21:29:37 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:43.175 21:29:37 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:43.175 21:29:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.175 21:29:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.175 21:29:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.175 21:29:37 -- paths/export.sh@5 -- # export PATH 00:28:43.175 21:29:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.175 21:29:37 -- host/fio.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:28:43.175 21:29:37 -- nvmf/common.sh@7 -- # uname -s 00:28:43.175 21:29:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:43.175 21:29:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:43.175 21:29:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:43.175 21:29:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:43.175 21:29:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:43.175 21:29:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:43.175 21:29:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:43.175 21:29:37 -- nvmf/common.sh@15 
-- # NVMF_TRANSPORT_OPTS= 00:28:43.175 21:29:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:43.175 21:29:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:43.175 21:29:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:28:43.175 21:29:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:28:43.175 21:29:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:43.175 21:29:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:43.175 21:29:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:28:43.175 21:29:37 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:43.175 21:29:37 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:28:43.175 21:29:37 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:43.175 21:29:37 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:43.175 21:29:37 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:43.175 21:29:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.175 21:29:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.175 21:29:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.175 21:29:37 -- paths/export.sh@5 -- # export PATH 00:28:43.175 21:29:37 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.175 21:29:37 -- nvmf/common.sh@47 -- # : 0 00:28:43.175 21:29:37 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:43.175 21:29:37 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:43.175 21:29:37 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:43.175 21:29:37 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:43.175 21:29:37 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:43.175 21:29:37 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:43.175 21:29:37 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:43.175 21:29:37 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:43.175 21:29:37 -- host/fio.sh@12 -- # nvmftestinit 00:28:43.175 21:29:37 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:28:43.175 21:29:37 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:43.175 21:29:37 -- nvmf/common.sh@437 -- # prepare_net_devs 00:28:43.175 21:29:37 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:28:43.175 21:29:37 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:28:43.175 21:29:37 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:43.175 21:29:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:43.175 21:29:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:43.175 21:29:37 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:28:43.175 21:29:37 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:28:43.175 21:29:37 -- nvmf/common.sh@285 -- # xtrace_disable 00:28:43.175 21:29:37 -- common/autotest_common.sh@10 -- # set +x 00:28:48.460 21:29:42 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:48.460 21:29:42 -- nvmf/common.sh@291 -- # pci_devs=() 00:28:48.460 21:29:42 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:48.460 21:29:42 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:48.460 21:29:42 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:48.460 21:29:42 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:48.460 21:29:42 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:48.460 21:29:42 -- nvmf/common.sh@295 -- # net_devs=() 00:28:48.460 21:29:42 -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:48.460 21:29:42 -- nvmf/common.sh@296 -- # e810=() 00:28:48.460 21:29:42 -- nvmf/common.sh@296 -- # local -ga e810 00:28:48.460 21:29:42 -- nvmf/common.sh@297 -- # x722=() 00:28:48.460 21:29:42 -- nvmf/common.sh@297 -- # local -ga x722 00:28:48.460 21:29:42 -- nvmf/common.sh@298 -- # mlx=() 00:28:48.460 21:29:42 -- nvmf/common.sh@298 -- # local -ga mlx 00:28:48.460 21:29:42 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:48.460 21:29:42 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:48.460 21:29:42 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:48.460 21:29:42 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:48.460 21:29:42 -- nvmf/common.sh@308 -- 
# mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:48.460 21:29:42 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:48.460 21:29:42 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:48.460 21:29:42 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:48.460 21:29:42 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:48.460 21:29:42 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:48.460 21:29:42 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:48.460 21:29:42 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:48.460 21:29:42 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:48.460 21:29:42 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:28:48.460 21:29:42 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:28:48.460 21:29:42 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:28:48.460 21:29:42 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:48.460 21:29:42 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:48.460 21:29:42 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:28:48.460 Found 0000:27:00.0 (0x8086 - 0x159b) 00:28:48.460 21:29:42 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:48.460 21:29:42 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:48.460 21:29:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:48.460 21:29:42 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:48.460 21:29:42 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:48.460 21:29:42 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:48.460 21:29:42 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:28:48.460 Found 0000:27:00.1 (0x8086 - 0x159b) 00:28:48.460 21:29:42 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:48.460 21:29:42 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:48.460 21:29:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:48.460 21:29:42 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:48.460 21:29:42 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:48.460 21:29:42 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:48.460 21:29:42 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:28:48.460 21:29:42 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:48.460 21:29:42 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:48.460 21:29:42 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:28:48.460 21:29:42 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:48.460 21:29:42 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:28:48.460 Found net devices under 0000:27:00.0: cvl_0_0 00:28:48.460 21:29:42 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:28:48.460 21:29:42 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:48.460 21:29:42 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:48.460 21:29:42 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:28:48.460 21:29:42 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:48.460 21:29:42 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:28:48.460 Found net devices under 0000:27:00.1: cvl_0_1 00:28:48.460 21:29:42 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:28:48.460 21:29:42 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:28:48.460 21:29:42 -- nvmf/common.sh@403 -- # 
is_hw=yes 00:28:48.460 21:29:42 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:28:48.460 21:29:42 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:28:48.460 21:29:42 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:28:48.460 21:29:42 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:48.460 21:29:42 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:48.460 21:29:42 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:48.460 21:29:42 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:48.460 21:29:42 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:48.460 21:29:42 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:48.460 21:29:42 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:48.460 21:29:42 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:48.460 21:29:42 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:48.460 21:29:42 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:48.460 21:29:42 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:48.460 21:29:42 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:48.460 21:29:42 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:48.460 21:29:42 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:48.460 21:29:42 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:48.460 21:29:42 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:48.460 21:29:42 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:48.460 21:29:42 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:48.460 21:29:42 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:48.460 21:29:42 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:48.460 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:48.460 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.794 ms 00:28:48.460 00:28:48.460 --- 10.0.0.2 ping statistics --- 00:28:48.460 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:48.460 rtt min/avg/max/mdev = 0.794/0.794/0.794/0.000 ms 00:28:48.460 21:29:42 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:48.460 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:48.460 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.494 ms 00:28:48.460 00:28:48.460 --- 10.0.0.1 ping statistics --- 00:28:48.460 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:48.460 rtt min/avg/max/mdev = 0.494/0.494/0.494/0.000 ms 00:28:48.460 21:29:42 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:48.460 21:29:42 -- nvmf/common.sh@411 -- # return 0 00:28:48.460 21:29:42 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:28:48.460 21:29:42 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:48.460 21:29:42 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:28:48.460 21:29:42 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:28:48.461 21:29:42 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:48.461 21:29:42 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:28:48.461 21:29:42 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:28:48.461 21:29:42 -- host/fio.sh@14 -- # [[ y != y ]] 00:28:48.461 21:29:42 -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:28:48.461 21:29:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:28:48.461 21:29:42 -- common/autotest_common.sh@10 -- # set +x 00:28:48.461 21:29:42 -- host/fio.sh@22 -- # nvmfpid=1598980 00:28:48.461 21:29:42 -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:48.461 21:29:42 -- host/fio.sh@26 -- # waitforlisten 1598980 00:28:48.461 21:29:42 -- common/autotest_common.sh@817 -- # '[' -z 1598980 ']' 00:28:48.461 21:29:42 -- host/fio.sh@21 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:48.461 21:29:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:48.461 21:29:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:48.461 21:29:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:48.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:48.461 21:29:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:48.461 21:29:42 -- common/autotest_common.sh@10 -- # set +x 00:28:48.461 [2024-04-23 21:29:42.716130] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 00:28:48.461 [2024-04-23 21:29:42.716235] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:48.721 EAL: No free 2048 kB hugepages reported on node 1 00:28:48.721 [2024-04-23 21:29:42.841881] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:48.721 [2024-04-23 21:29:42.940284] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:48.721 [2024-04-23 21:29:42.940322] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:48.721 [2024-04-23 21:29:42.940334] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:48.721 [2024-04-23 21:29:42.940344] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:48.721 [2024-04-23 21:29:42.940351] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
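The rpc_cmd calls that follow (host/fio.sh@27 through @34) stand up the NVMe-oF target side. Written out as direct rpc.py invocations they amount to the sketch below; it is reconstructed from the trace, and since rpc.py talks to the target over the /var/tmp/spdk.sock UNIX socket, no "ip netns exec" wrapper is needed. Flag comments assume the stock rpc.py option set.

    # Subsystem setup equivalent to the rpc_cmd trace that follows.
    RPC=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py

    # TCP transport. -u 8192 sets the I/O unit size; -o is the TCP-only
    # flag nvmf/common.sh appends (the C2H success toggle in stock rpc.py).
    $RPC nvmf_create_transport -t tcp -o -u 8192

    # 64 MiB RAM-backed bdev with 512-byte blocks, to serve as namespace 1.
    $RPC bdev_malloc_create 64 512 -b Malloc1

    # -a allows any host NQN to connect; -s sets the serial number.
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1

    # Listen on the namespaced address for both I/O and discovery.
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420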
00:28:48.721 [2024-04-23 21:29:42.940426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:48.721 [2024-04-23 21:29:42.940529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:48.721 [2024-04-23 21:29:42.940625] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:48.721 [2024-04-23 21:29:42.940649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:49.288 21:29:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:49.288 21:29:43 -- common/autotest_common.sh@850 -- # return 0 00:28:49.288 21:29:43 -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:49.288 21:29:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:49.288 21:29:43 -- common/autotest_common.sh@10 -- # set +x 00:28:49.288 [2024-04-23 21:29:43.417053] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:49.288 21:29:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:49.288 21:29:43 -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:28:49.288 21:29:43 -- common/autotest_common.sh@716 -- # xtrace_disable 00:28:49.288 21:29:43 -- common/autotest_common.sh@10 -- # set +x 00:28:49.288 21:29:43 -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:49.288 21:29:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:49.288 21:29:43 -- common/autotest_common.sh@10 -- # set +x 00:28:49.288 Malloc1 00:28:49.288 21:29:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:49.288 21:29:43 -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:49.288 21:29:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:49.288 21:29:43 -- common/autotest_common.sh@10 -- # set +x 00:28:49.288 21:29:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:49.288 21:29:43 -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:49.288 21:29:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:49.288 21:29:43 -- common/autotest_common.sh@10 -- # set +x 00:28:49.288 21:29:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:49.288 21:29:43 -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:49.288 21:29:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:49.288 21:29:43 -- common/autotest_common.sh@10 -- # set +x 00:28:49.288 [2024-04-23 21:29:43.513102] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:49.288 21:29:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:49.288 21:29:43 -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:49.288 21:29:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:49.288 21:29:43 -- common/autotest_common.sh@10 -- # set +x 00:28:49.288 21:29:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:49.288 21:29:43 -- host/fio.sh@36 -- # PLUGIN_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme 00:28:49.288 21:29:43 -- host/fio.sh@39 -- # fio_nvme /var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:49.288 21:29:43 -- common/autotest_common.sh@1346 -- # fio_plugin /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/example_config.fio 
'--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:49.288 21:29:43 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:28:49.288 21:29:43 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:49.288 21:29:43 -- common/autotest_common.sh@1325 -- # local sanitizers 00:28:49.288 21:29:43 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme 00:28:49.288 21:29:43 -- common/autotest_common.sh@1327 -- # shift 00:28:49.288 21:29:43 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:28:49.288 21:29:43 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:28:49.288 21:29:43 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme 00:28:49.288 21:29:43 -- common/autotest_common.sh@1331 -- # grep libasan 00:28:49.288 21:29:43 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:28:49.288 21:29:43 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:28:49.288 21:29:43 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:28:49.288 21:29:43 -- common/autotest_common.sh@1333 -- # break 00:28:49.288 21:29:43 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme' 00:28:49.288 21:29:43 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:49.871 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:28:49.871 fio-3.35 00:28:49.871 Starting 1 thread 00:28:49.871 EAL: No free 2048 kB hugepages reported on node 1 00:28:52.404 00:28:52.404 test: (groupid=0, jobs=1): err= 0: pid=1599452: Tue Apr 23 21:29:46 2024 00:28:52.404 read: IOPS=12.2k, BW=47.8MiB/s (50.2MB/s)(95.9MiB/2005msec) 00:28:52.404 slat (nsec): min=1563, max=97479, avg=1787.54, stdev=882.76 00:28:52.404 clat (usec): min=4165, max=10032, avg=5794.81, stdev=411.08 00:28:52.404 lat (usec): min=4167, max=10033, avg=5796.60, stdev=411.06 00:28:52.404 clat percentiles (usec): 00:28:52.404 | 1.00th=[ 4883], 5.00th=[ 5145], 10.00th=[ 5276], 20.00th=[ 5473], 00:28:52.404 | 30.00th=[ 5604], 40.00th=[ 5669], 50.00th=[ 5800], 60.00th=[ 5866], 00:28:52.404 | 70.00th=[ 5997], 80.00th=[ 6128], 90.00th=[ 6259], 95.00th=[ 6456], 00:28:52.404 | 99.00th=[ 6783], 99.50th=[ 6980], 99.90th=[ 8291], 99.95th=[ 8848], 00:28:52.404 | 99.99th=[ 9896] 00:28:52.404 bw ( KiB/s): min=47736, max=49768, per=100.00%, avg=48992.00, stdev=907.40, samples=4 00:28:52.404 iops : min=11934, max=12442, avg=12248.00, stdev=226.85, samples=4 00:28:52.404 write: IOPS=12.2k, BW=47.7MiB/s (50.0MB/s)(95.6MiB/2005msec); 0 zone resets 00:28:52.404 slat (nsec): min=1613, max=117510, avg=1902.77, stdev=834.21 00:28:52.404 clat (usec): min=1251, max=8841, avg=4627.67, stdev=362.69 00:28:52.404 lat (usec): min=1261, max=8843, avg=4629.57, stdev=362.68 00:28:52.404 clat percentiles (usec): 00:28:52.404 | 1.00th=[ 3818], 5.00th=[ 4080], 10.00th=[ 4228], 20.00th=[ 4359], 00:28:52.404 | 30.00th=[ 4490], 40.00th=[ 4555], 50.00th=[ 4621], 60.00th=[ 4686], 00:28:52.404 | 70.00th=[ 4817], 80.00th=[ 4883], 90.00th=[ 5014], 95.00th=[ 5145], 00:28:52.404 | 99.00th=[ 5473], 99.50th=[ 5604], 99.90th=[ 7373], 99.95th=[ 8160], 00:28:52.404 | 
99.99th=[ 8848] 00:28:52.404 bw ( KiB/s): min=48424, max=49456, per=99.98%, avg=48838.00, stdev=446.92, samples=4 00:28:52.404 iops : min=12106, max=12364, avg=12209.50, stdev=111.73, samples=4 00:28:52.404 lat (msec) : 2=0.01%, 4=1.40%, 10=98.60%, 20=0.01% 00:28:52.404 cpu : usr=64.32%, sys=29.64%, ctx=91, majf=0, minf=1528 00:28:52.404 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:28:52.404 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:52.404 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:52.404 issued rwts: total=24558,24484,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:52.404 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:52.404 00:28:52.404 Run status group 0 (all jobs): 00:28:52.404 READ: bw=47.8MiB/s (50.2MB/s), 47.8MiB/s-47.8MiB/s (50.2MB/s-50.2MB/s), io=95.9MiB (101MB), run=2005-2005msec 00:28:52.404 WRITE: bw=47.7MiB/s (50.0MB/s), 47.7MiB/s-47.7MiB/s (50.0MB/s-50.0MB/s), io=95.6MiB (100MB), run=2005-2005msec 00:28:52.404 ----------------------------------------------------- 00:28:52.404 Suppressions used: 00:28:52.404 count bytes template 00:28:52.404 1 57 /usr/src/fio/parse.c 00:28:52.404 1 8 libtcmalloc_minimal.so 00:28:52.404 ----------------------------------------------------- 00:28:52.404 00:28:52.404 21:29:46 -- host/fio.sh@43 -- # fio_nvme /var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:28:52.404 21:29:46 -- common/autotest_common.sh@1346 -- # fio_plugin /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:28:52.404 21:29:46 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:28:52.404 21:29:46 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:52.404 21:29:46 -- common/autotest_common.sh@1325 -- # local sanitizers 00:28:52.404 21:29:46 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme 00:28:52.404 21:29:46 -- common/autotest_common.sh@1327 -- # shift 00:28:52.404 21:29:46 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:28:52.404 21:29:46 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:28:52.404 21:29:46 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme 00:28:52.404 21:29:46 -- common/autotest_common.sh@1331 -- # grep libasan 00:28:52.404 21:29:46 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:28:52.404 21:29:46 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:28:52.404 21:29:46 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:28:52.404 21:29:46 -- common/autotest_common.sh@1333 -- # break 00:28:52.404 21:29:46 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme' 00:28:52.404 21:29:46 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:28:52.983 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:28:52.983 fio-3.35 
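Note: the fio_plugin helper traced above is what lets an ASan-instrumented SPDK build run under stock fio; the sanitizer runtime has to be the first DSO loaded, so the helper resolves it from the plugin's link map and puts it ahead of the spdk_nvme ioengine in LD_PRELOAD. A minimal sketch of the same sequence, with the workspace path abbreviated to $SPDK_DIR and only the libasan probe shown:

  plugin=$SPDK_DIR/build/fio/spdk_nvme                          # the fio ioengine shared object
  asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')   # runtime the plugin was linked against, if any
  LD_PRELOAD="${asan_lib:+$asan_lib }$plugin" /usr/src/fio/fio "$job_file" \
      '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'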
00:28:52.983 Starting 1 thread 00:28:52.983 EAL: No free 2048 kB hugepages reported on node 1 00:28:55.514 00:28:55.514 test: (groupid=0, jobs=1): err= 0: pid=1600182: Tue Apr 23 21:29:49 2024 00:28:55.514 read: IOPS=8774, BW=137MiB/s (144MB/s)(276MiB/2016msec) 00:28:55.514 slat (usec): min=2, max=141, avg= 3.69, stdev= 1.75 00:28:55.514 clat (usec): min=2680, max=22037, avg=8905.89, stdev=2629.28 00:28:55.514 lat (usec): min=2682, max=22042, avg=8909.58, stdev=2629.93 00:28:55.514 clat percentiles (usec): 00:28:55.514 | 1.00th=[ 4047], 5.00th=[ 4883], 10.00th=[ 5604], 20.00th=[ 6521], 00:28:55.514 | 30.00th=[ 7373], 40.00th=[ 8094], 50.00th=[ 8717], 60.00th=[ 9372], 00:28:55.514 | 70.00th=[10159], 80.00th=[11076], 90.00th=[12518], 95.00th=[13829], 00:28:55.514 | 99.00th=[15401], 99.50th=[16188], 99.90th=[17695], 99.95th=[18220], 00:28:55.514 | 99.99th=[19268] 00:28:55.514 bw ( KiB/s): min=59872, max=80704, per=49.26%, avg=69160.00, stdev=10786.62, samples=4 00:28:55.514 iops : min= 3742, max= 5044, avg=4322.50, stdev=674.16, samples=4 00:28:55.514 write: IOPS=4966, BW=77.6MiB/s (81.4MB/s)(141MiB/1822msec); 0 zone resets 00:28:55.514 slat (usec): min=28, max=205, avg=39.21, stdev=10.72 00:28:55.514 clat (usec): min=3910, max=18381, avg=9967.25, stdev=2452.39 00:28:55.514 lat (usec): min=3939, max=18415, avg=10006.47, stdev=2459.63 00:28:55.514 clat percentiles (usec): 00:28:55.514 | 1.00th=[ 5735], 5.00th=[ 6456], 10.00th=[ 6915], 20.00th=[ 7701], 00:28:55.514 | 30.00th=[ 8356], 40.00th=[ 8979], 50.00th=[ 9634], 60.00th=[10421], 00:28:55.514 | 70.00th=[11338], 80.00th=[12256], 90.00th=[13304], 95.00th=[14091], 00:28:55.514 | 99.00th=[16319], 99.50th=[16909], 99.90th=[17957], 99.95th=[18220], 00:28:55.514 | 99.99th=[18482] 00:28:55.514 bw ( KiB/s): min=62432, max=83680, per=90.44%, avg=71864.00, stdev=11047.85, samples=4 00:28:55.514 iops : min= 3902, max= 5230, avg=4491.50, stdev=690.49, samples=4 00:28:55.514 lat (msec) : 4=0.58%, 10=63.36%, 20=36.05%, 50=0.01% 00:28:55.514 cpu : usr=84.71%, sys=13.95%, ctx=14, majf=0, minf=2287 00:28:55.514 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:28:55.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.514 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:55.514 issued rwts: total=17690,9049,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:55.514 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:55.514 00:28:55.514 Run status group 0 (all jobs): 00:28:55.514 READ: bw=137MiB/s (144MB/s), 137MiB/s-137MiB/s (144MB/s-144MB/s), io=276MiB (290MB), run=2016-2016msec 00:28:55.514 WRITE: bw=77.6MiB/s (81.4MB/s), 77.6MiB/s-77.6MiB/s (81.4MB/s-81.4MB/s), io=141MiB (148MB), run=1822-1822msec 00:28:55.514 ----------------------------------------------------- 00:28:55.514 Suppressions used: 00:28:55.514 count bytes template 00:28:55.514 1 57 /usr/src/fio/parse.c 00:28:55.514 66 6336 /usr/src/fio/iolog.c 00:28:55.514 1 8 libtcmalloc_minimal.so 00:28:55.514 ----------------------------------------------------- 00:28:55.514 00:28:55.514 21:29:49 -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:55.514 21:29:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:55.514 21:29:49 -- common/autotest_common.sh@10 -- # set +x 00:28:55.514 21:29:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:55.514 21:29:49 -- host/fio.sh@47 -- # '[' 1 -eq 1 ']' 00:28:55.514 21:29:49 -- host/fio.sh@49 -- # bdfs=($(get_nvme_bdfs)) 
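Note: the host enumeration expanded in the trace below is plain jq over SPDK's generated config; get_nvme_bdfs collects the PCIe address of every local NVMe controller, roughly

  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  printf '%s\n' "${bdfs[@]}"    # here: 0000:03:00.0 and 0000:c9:00.0

and the get_lvs_free_mb helper used a few steps later converts an lvstore's free clusters to megabytes, free_mb = free_clusters * cluster_size / 1048576; with the 1 GiB clusters created on lvs_0 that is 893 * 1073741824 / 1048576 = 914432 MB, the size passed to bdev_lvol_create.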
00:28:55.514 21:29:49 -- host/fio.sh@49 -- # get_nvme_bdfs 00:28:55.514 21:29:49 -- common/autotest_common.sh@1499 -- # bdfs=() 00:28:55.514 21:29:49 -- common/autotest_common.sh@1499 -- # local bdfs 00:28:55.514 21:29:49 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:28:55.514 21:29:49 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:55.514 21:29:49 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:28:55.772 21:29:49 -- common/autotest_common.sh@1501 -- # (( 2 == 0 )) 00:28:55.772 21:29:49 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:03:00.0 0000:c9:00.0 00:28:55.772 21:29:49 -- host/fio.sh@50 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:03:00.0 -i 10.0.0.2 00:28:55.772 21:29:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:55.772 21:29:49 -- common/autotest_common.sh@10 -- # set +x 00:28:56.030 Nvme0n1 00:28:56.030 21:29:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:56.030 21:29:50 -- host/fio.sh@51 -- # rpc_cmd bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:28:56.030 21:29:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:56.030 21:29:50 -- common/autotest_common.sh@10 -- # set +x 00:28:56.597 21:29:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:56.597 21:29:50 -- host/fio.sh@51 -- # ls_guid=cb6e9bee-0ac8-48a3-95ba-3bba8fb532fd 00:28:56.597 21:29:50 -- host/fio.sh@52 -- # get_lvs_free_mb cb6e9bee-0ac8-48a3-95ba-3bba8fb532fd 00:28:56.597 21:29:50 -- common/autotest_common.sh@1350 -- # local lvs_uuid=cb6e9bee-0ac8-48a3-95ba-3bba8fb532fd 00:28:56.597 21:29:50 -- common/autotest_common.sh@1351 -- # local lvs_info 00:28:56.597 21:29:50 -- common/autotest_common.sh@1352 -- # local fc 00:28:56.597 21:29:50 -- common/autotest_common.sh@1353 -- # local cs 00:28:56.597 21:29:50 -- common/autotest_common.sh@1354 -- # rpc_cmd bdev_lvol_get_lvstores 00:28:56.597 21:29:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:56.597 21:29:50 -- common/autotest_common.sh@10 -- # set +x 00:28:56.597 21:29:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:56.597 21:29:50 -- common/autotest_common.sh@1354 -- # lvs_info='[ 00:28:56.597 { 00:28:56.597 "uuid": "cb6e9bee-0ac8-48a3-95ba-3bba8fb532fd", 00:28:56.597 "name": "lvs_0", 00:28:56.597 "base_bdev": "Nvme0n1", 00:28:56.597 "total_data_clusters": 893, 00:28:56.597 "free_clusters": 893, 00:28:56.597 "block_size": 512, 00:28:56.597 "cluster_size": 1073741824 00:28:56.597 } 00:28:56.597 ]' 00:28:56.597 21:29:50 -- common/autotest_common.sh@1355 -- # jq '.[] | select(.uuid=="cb6e9bee-0ac8-48a3-95ba-3bba8fb532fd") .free_clusters' 00:28:56.597 21:29:50 -- common/autotest_common.sh@1355 -- # fc=893 00:28:56.597 21:29:50 -- common/autotest_common.sh@1356 -- # jq '.[] | select(.uuid=="cb6e9bee-0ac8-48a3-95ba-3bba8fb532fd") .cluster_size' 00:28:56.857 21:29:50 -- common/autotest_common.sh@1356 -- # cs=1073741824 00:28:56.857 21:29:50 -- common/autotest_common.sh@1359 -- # free_mb=914432 00:28:56.857 21:29:50 -- common/autotest_common.sh@1360 -- # echo 914432 00:28:56.857 914432 00:28:56.857 21:29:50 -- host/fio.sh@53 -- # rpc_cmd bdev_lvol_create -l lvs_0 lbd_0 914432 00:28:56.857 21:29:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:56.857 21:29:50 -- common/autotest_common.sh@10 -- # set +x 00:28:56.857 77f1c8ea-fa7b-4a45-be98-3372c6e59069 00:28:56.857 21:29:50 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:56.857 21:29:50 -- host/fio.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:28:56.857 21:29:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:56.857 21:29:50 -- common/autotest_common.sh@10 -- # set +x 00:28:56.857 21:29:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:56.857 21:29:50 -- host/fio.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:28:56.857 21:29:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:56.857 21:29:50 -- common/autotest_common.sh@10 -- # set +x 00:28:56.857 21:29:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:56.857 21:29:50 -- host/fio.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:56.857 21:29:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:56.857 21:29:50 -- common/autotest_common.sh@10 -- # set +x 00:28:56.857 21:29:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:56.857 21:29:50 -- host/fio.sh@57 -- # fio_nvme /var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:56.857 21:29:50 -- common/autotest_common.sh@1346 -- # fio_plugin /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:56.857 21:29:50 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:28:56.857 21:29:50 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:56.857 21:29:50 -- common/autotest_common.sh@1325 -- # local sanitizers 00:28:56.857 21:29:50 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme 00:28:56.857 21:29:50 -- common/autotest_common.sh@1327 -- # shift 00:28:56.857 21:29:50 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:28:56.857 21:29:50 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:28:56.857 21:29:50 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme 00:28:56.857 21:29:50 -- common/autotest_common.sh@1331 -- # grep libasan 00:28:56.857 21:29:50 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:28:56.857 21:29:50 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:28:56.857 21:29:50 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:28:56.857 21:29:50 -- common/autotest_common.sh@1333 -- # break 00:28:56.857 21:29:50 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme' 00:28:56.857 21:29:50 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:57.118 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:28:57.118 fio-3.35 00:28:57.118 Starting 1 thread 00:28:57.376 EAL: No free 2048 kB hugepages reported on node 1 00:28:59.916 00:28:59.916 test: (groupid=0, jobs=1): err= 0: pid=1601219: Tue Apr 23 21:29:53 2024 00:28:59.916 read: IOPS=9280, BW=36.3MiB/s 
(38.0MB/s)(72.7MiB/2006msec) 00:28:59.916 slat (nsec): min=1600, max=95829, avg=1952.12, stdev=951.52 00:28:59.916 clat (usec): min=3514, max=12575, avg=7623.12, stdev=618.61 00:28:59.916 lat (usec): min=3526, max=12577, avg=7625.07, stdev=618.55 00:28:59.916 clat percentiles (usec): 00:28:59.916 | 1.00th=[ 6259], 5.00th=[ 6652], 10.00th=[ 6849], 20.00th=[ 7111], 00:28:59.916 | 30.00th=[ 7308], 40.00th=[ 7439], 50.00th=[ 7635], 60.00th=[ 7767], 00:28:59.916 | 70.00th=[ 7898], 80.00th=[ 8094], 90.00th=[ 8356], 95.00th=[ 8586], 00:28:59.916 | 99.00th=[ 9110], 99.50th=[ 9634], 99.90th=[10945], 99.95th=[11731], 00:28:59.916 | 99.99th=[12518] 00:28:59.916 bw ( KiB/s): min=35728, max=37928, per=99.93%, avg=37096.00, stdev=956.82, samples=4 00:28:59.916 iops : min= 8932, max= 9482, avg=9274.00, stdev=239.20, samples=4 00:28:59.916 write: IOPS=9286, BW=36.3MiB/s (38.0MB/s)(72.8MiB/2006msec); 0 zone resets 00:28:59.916 slat (nsec): min=1638, max=81556, avg=2069.50, stdev=691.20 00:28:59.916 clat (usec): min=1539, max=11632, avg=6062.76, stdev=541.94 00:28:59.916 lat (usec): min=1546, max=11635, avg=6064.83, stdev=541.92 00:28:59.916 clat percentiles (usec): 00:28:59.916 | 1.00th=[ 4817], 5.00th=[ 5211], 10.00th=[ 5407], 20.00th=[ 5669], 00:28:59.916 | 30.00th=[ 5800], 40.00th=[ 5932], 50.00th=[ 6063], 60.00th=[ 6194], 00:28:59.916 | 70.00th=[ 6325], 80.00th=[ 6456], 90.00th=[ 6652], 95.00th=[ 6915], 00:28:59.916 | 99.00th=[ 7373], 99.50th=[ 7570], 99.90th=[ 9241], 99.95th=[10028], 00:28:59.916 | 99.99th=[11600] 00:28:59.916 bw ( KiB/s): min=36536, max=37576, per=99.99%, avg=37140.00, stdev=435.96, samples=4 00:28:59.916 iops : min= 9134, max= 9394, avg=9285.00, stdev=108.99, samples=4 00:28:59.916 lat (msec) : 2=0.01%, 4=0.10%, 10=99.74%, 20=0.14% 00:28:59.916 cpu : usr=58.05%, sys=36.41%, ctx=92, majf=0, minf=1524 00:28:59.916 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:28:59.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:59.916 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:59.916 issued rwts: total=18617,18628,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:59.916 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:59.916 00:28:59.916 Run status group 0 (all jobs): 00:28:59.916 READ: bw=36.3MiB/s (38.0MB/s), 36.3MiB/s-36.3MiB/s (38.0MB/s-38.0MB/s), io=72.7MiB (76.3MB), run=2006-2006msec 00:28:59.916 WRITE: bw=36.3MiB/s (38.0MB/s), 36.3MiB/s-36.3MiB/s (38.0MB/s-38.0MB/s), io=72.8MiB (76.3MB), run=2006-2006msec 00:28:59.916 ----------------------------------------------------- 00:28:59.916 Suppressions used: 00:28:59.916 count bytes template 00:28:59.916 1 58 /usr/src/fio/parse.c 00:28:59.916 1 8 libtcmalloc_minimal.so 00:28:59.917 ----------------------------------------------------- 00:28:59.917 00:28:59.917 21:29:54 -- host/fio.sh@59 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:59.917 21:29:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:59.917 21:29:54 -- common/autotest_common.sh@10 -- # set +x 00:28:59.917 21:29:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:59.917 21:29:54 -- host/fio.sh@62 -- # rpc_cmd bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:28:59.917 21:29:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:59.917 21:29:54 -- common/autotest_common.sh@10 -- # set +x 00:28:59.917 21:29:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:59.917 21:29:54 -- host/fio.sh@62 -- # 
ls_nested_guid=832a016a-cf04-4b1c-b7cb-0ed2bb285bc1 00:28:59.917 21:29:54 -- host/fio.sh@63 -- # get_lvs_free_mb 832a016a-cf04-4b1c-b7cb-0ed2bb285bc1 00:28:59.917 21:29:54 -- common/autotest_common.sh@1350 -- # local lvs_uuid=832a016a-cf04-4b1c-b7cb-0ed2bb285bc1 00:28:59.917 21:29:54 -- common/autotest_common.sh@1351 -- # local lvs_info 00:28:59.917 21:29:54 -- common/autotest_common.sh@1352 -- # local fc 00:28:59.917 21:29:54 -- common/autotest_common.sh@1353 -- # local cs 00:28:59.917 21:29:54 -- common/autotest_common.sh@1354 -- # rpc_cmd bdev_lvol_get_lvstores 00:28:59.917 21:29:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:59.917 21:29:54 -- common/autotest_common.sh@10 -- # set +x 00:28:59.917 21:29:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:59.917 21:29:54 -- common/autotest_common.sh@1354 -- # lvs_info='[ 00:28:59.917 { 00:28:59.917 "uuid": "cb6e9bee-0ac8-48a3-95ba-3bba8fb532fd", 00:28:59.917 "name": "lvs_0", 00:28:59.917 "base_bdev": "Nvme0n1", 00:28:59.917 "total_data_clusters": 893, 00:28:59.917 "free_clusters": 0, 00:28:59.917 "block_size": 512, 00:28:59.917 "cluster_size": 1073741824 00:28:59.917 }, 00:28:59.917 { 00:28:59.917 "uuid": "832a016a-cf04-4b1c-b7cb-0ed2bb285bc1", 00:28:59.917 "name": "lvs_n_0", 00:28:59.917 "base_bdev": "77f1c8ea-fa7b-4a45-be98-3372c6e59069", 00:28:59.917 "total_data_clusters": 228384, 00:28:59.917 "free_clusters": 228384, 00:28:59.917 "block_size": 512, 00:28:59.917 "cluster_size": 4194304 00:28:59.917 } 00:28:59.917 ]' 00:28:59.917 21:29:54 -- common/autotest_common.sh@1355 -- # jq '.[] | select(.uuid=="832a016a-cf04-4b1c-b7cb-0ed2bb285bc1") .free_clusters' 00:28:59.917 21:29:54 -- common/autotest_common.sh@1355 -- # fc=228384 00:28:59.917 21:29:54 -- common/autotest_common.sh@1356 -- # jq '.[] | select(.uuid=="832a016a-cf04-4b1c-b7cb-0ed2bb285bc1") .cluster_size' 00:28:59.917 21:29:54 -- common/autotest_common.sh@1356 -- # cs=4194304 00:28:59.917 21:29:54 -- common/autotest_common.sh@1359 -- # free_mb=913536 00:28:59.917 21:29:54 -- common/autotest_common.sh@1360 -- # echo 913536 00:28:59.917 913536 00:28:59.917 21:29:54 -- host/fio.sh@64 -- # rpc_cmd bdev_lvol_create -l lvs_n_0 lbd_nest_0 913536 00:28:59.917 21:29:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:59.917 21:29:54 -- common/autotest_common.sh@10 -- # set +x 00:29:00.485 1d1bd4c9-cd81-4ee6-a01d-a7e699c7bc33 00:29:00.485 21:29:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:00.485 21:29:54 -- host/fio.sh@65 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:29:00.485 21:29:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:00.485 21:29:54 -- common/autotest_common.sh@10 -- # set +x 00:29:00.485 21:29:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:00.485 21:29:54 -- host/fio.sh@66 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:29:00.485 21:29:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:00.485 21:29:54 -- common/autotest_common.sh@10 -- # set +x 00:29:00.485 21:29:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:00.485 21:29:54 -- host/fio.sh@67 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:29:00.485 21:29:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:00.485 21:29:54 -- common/autotest_common.sh@10 -- # set +x 00:29:00.485 21:29:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:00.485 21:29:54 -- host/fio.sh@68 -- # fio_nvme 
/var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:00.485 21:29:54 -- common/autotest_common.sh@1346 -- # fio_plugin /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:00.485 21:29:54 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:29:00.485 21:29:54 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:00.485 21:29:54 -- common/autotest_common.sh@1325 -- # local sanitizers 00:29:00.485 21:29:54 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme 00:29:00.485 21:29:54 -- common/autotest_common.sh@1327 -- # shift 00:29:00.485 21:29:54 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:29:00.485 21:29:54 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:29:00.485 21:29:54 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme 00:29:00.485 21:29:54 -- common/autotest_common.sh@1331 -- # grep libasan 00:29:00.485 21:29:54 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:29:00.485 21:29:54 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:29:00.485 21:29:54 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:29:00.485 21:29:54 -- common/autotest_common.sh@1333 -- # break 00:29:00.485 21:29:54 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:00.485 21:29:54 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:01.069 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:01.069 fio-3.35 00:29:01.069 Starting 1 thread 00:29:01.069 EAL: No free 2048 kB hugepages reported on node 1 00:29:03.599 00:29:03.599 test: (groupid=0, jobs=1): err= 0: pid=1602230: Tue Apr 23 21:29:57 2024 00:29:03.599 read: IOPS=8312, BW=32.5MiB/s (34.0MB/s)(65.1MiB/2006msec) 00:29:03.599 slat (nsec): min=1588, max=93945, avg=1865.81, stdev=1042.43 00:29:03.599 clat (usec): min=3212, max=13034, avg=8555.65, stdev=690.93 00:29:03.599 lat (usec): min=3229, max=13036, avg=8557.52, stdev=690.87 00:29:03.599 clat percentiles (usec): 00:29:03.599 | 1.00th=[ 6980], 5.00th=[ 7504], 10.00th=[ 7701], 20.00th=[ 7963], 00:29:03.599 | 30.00th=[ 8225], 40.00th=[ 8455], 50.00th=[ 8586], 60.00th=[ 8717], 00:29:03.599 | 70.00th=[ 8848], 80.00th=[ 9110], 90.00th=[ 9372], 95.00th=[ 9634], 00:29:03.599 | 99.00th=[10159], 99.50th=[10421], 99.90th=[11863], 99.95th=[12780], 00:29:03.599 | 99.99th=[13042] 00:29:03.599 bw ( KiB/s): min=32392, max=33800, per=99.92%, avg=33222.00, stdev=593.67, samples=4 00:29:03.599 iops : min= 8098, max= 8450, avg=8305.50, stdev=148.42, samples=4 00:29:03.599 write: IOPS=8316, BW=32.5MiB/s (34.1MB/s)(65.2MiB/2006msec); 0 zone resets 00:29:03.599 slat (nsec): min=1645, max=80886, avg=1979.18, stdev=715.44 00:29:03.599 clat (usec): min=2209, max=11862, avg=6790.64, stdev=606.82 00:29:03.599 lat (usec): min=2219, max=11864, avg=6792.62, stdev=606.78 00:29:03.599 
clat percentiles (usec): 00:29:03.599 | 1.00th=[ 5407], 5.00th=[ 5866], 10.00th=[ 6063], 20.00th=[ 6325], 00:29:03.599 | 30.00th=[ 6521], 40.00th=[ 6652], 50.00th=[ 6783], 60.00th=[ 6915], 00:29:03.599 | 70.00th=[ 7046], 80.00th=[ 7242], 90.00th=[ 7504], 95.00th=[ 7701], 00:29:03.599 | 99.00th=[ 8225], 99.50th=[ 8586], 99.90th=[ 9765], 99.95th=[11600], 00:29:03.599 | 99.99th=[11863] 00:29:03.599 bw ( KiB/s): min=33088, max=33344, per=99.92%, avg=33238.00, stdev=108.79, samples=4 00:29:03.599 iops : min= 8272, max= 8336, avg=8309.50, stdev=27.20, samples=4 00:29:03.599 lat (msec) : 4=0.12%, 10=99.01%, 20=0.88% 00:29:03.599 cpu : usr=62.74%, sys=33.07%, ctx=102, majf=0, minf=1526 00:29:03.599 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:29:03.599 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:03.599 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:03.599 issued rwts: total=16675,16682,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:03.599 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:03.599 00:29:03.599 Run status group 0 (all jobs): 00:29:03.599 READ: bw=32.5MiB/s (34.0MB/s), 32.5MiB/s-32.5MiB/s (34.0MB/s-34.0MB/s), io=65.1MiB (68.3MB), run=2006-2006msec 00:29:03.599 WRITE: bw=32.5MiB/s (34.1MB/s), 32.5MiB/s-32.5MiB/s (34.1MB/s-34.1MB/s), io=65.2MiB (68.3MB), run=2006-2006msec 00:29:03.599 ----------------------------------------------------- 00:29:03.599 Suppressions used: 00:29:03.599 count bytes template 00:29:03.599 1 58 /usr/src/fio/parse.c 00:29:03.599 1 8 libtcmalloc_minimal.so 00:29:03.599 ----------------------------------------------------- 00:29:03.599 00:29:03.599 21:29:57 -- host/fio.sh@70 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:29:03.599 21:29:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:03.599 21:29:57 -- common/autotest_common.sh@10 -- # set +x 00:29:03.599 21:29:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:03.599 21:29:57 -- host/fio.sh@72 -- # sync 00:29:03.599 21:29:57 -- host/fio.sh@74 -- # rpc_cmd bdev_lvol_delete lvs_n_0/lbd_nest_0 00:29:03.599 21:29:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:03.599 21:29:57 -- common/autotest_common.sh@10 -- # set +x 00:29:04.973 21:29:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:04.973 21:29:58 -- host/fio.sh@75 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_n_0 00:29:04.973 21:29:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:04.973 21:29:58 -- common/autotest_common.sh@10 -- # set +x 00:29:04.973 21:29:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:04.973 21:29:58 -- host/fio.sh@76 -- # rpc_cmd bdev_lvol_delete lvs_0/lbd_0 00:29:04.973 21:29:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:04.973 21:29:58 -- common/autotest_common.sh@10 -- # set +x 00:29:05.539 21:29:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:05.539 21:29:59 -- host/fio.sh@77 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_0 00:29:05.539 21:29:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:05.539 21:29:59 -- common/autotest_common.sh@10 -- # set +x 00:29:05.539 21:29:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:05.539 21:29:59 -- host/fio.sh@78 -- # rpc_cmd bdev_nvme_detach_controller Nvme0 00:29:05.539 21:29:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:05.539 21:29:59 -- common/autotest_common.sh@10 -- # set +x 00:29:06.105 21:30:00 -- common/autotest_common.sh@577 -- # [[ 
0 == 0 ]] 00:29:06.105 21:30:00 -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:29:06.105 21:30:00 -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:29:06.105 21:30:00 -- host/fio.sh@84 -- # nvmftestfini 00:29:06.105 21:30:00 -- nvmf/common.sh@477 -- # nvmfcleanup 00:29:06.105 21:30:00 -- nvmf/common.sh@117 -- # sync 00:29:06.105 21:30:00 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:06.105 21:30:00 -- nvmf/common.sh@120 -- # set +e 00:29:06.105 21:30:00 -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:06.105 21:30:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:06.362 rmmod nvme_tcp 00:29:06.362 rmmod nvme_fabrics 00:29:06.362 rmmod nvme_keyring 00:29:06.362 21:30:00 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:06.362 21:30:00 -- nvmf/common.sh@124 -- # set -e 00:29:06.362 21:30:00 -- nvmf/common.sh@125 -- # return 0 00:29:06.362 21:30:00 -- nvmf/common.sh@478 -- # '[' -n 1598980 ']' 00:29:06.362 21:30:00 -- nvmf/common.sh@479 -- # killprocess 1598980 00:29:06.362 21:30:00 -- common/autotest_common.sh@936 -- # '[' -z 1598980 ']' 00:29:06.362 21:30:00 -- common/autotest_common.sh@940 -- # kill -0 1598980 00:29:06.362 21:30:00 -- common/autotest_common.sh@941 -- # uname 00:29:06.362 21:30:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:06.362 21:30:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1598980 00:29:06.362 21:30:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:06.362 21:30:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:06.362 21:30:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1598980' 00:29:06.362 killing process with pid 1598980 00:29:06.362 21:30:00 -- common/autotest_common.sh@955 -- # kill 1598980 00:29:06.362 21:30:00 -- common/autotest_common.sh@960 -- # wait 1598980 00:29:06.927 21:30:00 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:29:06.927 21:30:00 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:29:06.927 21:30:00 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:29:06.927 21:30:00 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:06.927 21:30:00 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:06.927 21:30:00 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:06.927 21:30:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:06.927 21:30:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:08.829 21:30:03 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:08.829 00:29:08.829 real 0m25.794s 00:29:08.829 user 2m24.719s 00:29:08.829 sys 0m8.755s 00:29:08.829 21:30:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:08.829 21:30:03 -- common/autotest_common.sh@10 -- # set +x 00:29:08.829 ************************************ 00:29:08.829 END TEST nvmf_fio_host 00:29:08.829 ************************************ 00:29:08.829 21:30:03 -- nvmf/nvmf.sh@98 -- # run_test nvmf_failover /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:29:08.829 21:30:03 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:29:08.829 21:30:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:08.829 21:30:03 -- common/autotest_common.sh@10 -- # set +x 00:29:09.087 ************************************ 00:29:09.087 START TEST nvmf_failover 00:29:09.087 ************************************ 00:29:09.087 21:30:03 -- common/autotest_common.sh@1111 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:29:09.087 * Looking for test storage... 00:29:09.087 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:29:09.087 21:30:03 -- host/failover.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:29:09.087 21:30:03 -- nvmf/common.sh@7 -- # uname -s 00:29:09.087 21:30:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:09.087 21:30:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:09.087 21:30:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:09.087 21:30:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:09.087 21:30:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:09.087 21:30:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:09.087 21:30:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:09.087 21:30:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:09.087 21:30:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:09.087 21:30:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:09.087 21:30:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:29:09.087 21:30:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:29:09.087 21:30:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:09.087 21:30:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:09.087 21:30:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:29:09.087 21:30:03 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:09.087 21:30:03 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:29:09.087 21:30:03 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:09.087 21:30:03 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:09.087 21:30:03 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:09.087 21:30:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.087 21:30:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.087 21:30:03 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.087 21:30:03 -- paths/export.sh@5 -- # export PATH 00:29:09.087 21:30:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.087 21:30:03 -- nvmf/common.sh@47 -- # : 0 00:29:09.087 21:30:03 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:09.087 21:30:03 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:09.087 21:30:03 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:09.087 21:30:03 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:09.087 21:30:03 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:09.087 21:30:03 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:09.087 21:30:03 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:09.087 21:30:03 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:09.087 21:30:03 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:09.087 21:30:03 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:09.087 21:30:03 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:29:09.087 21:30:03 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:09.087 21:30:03 -- host/failover.sh@18 -- # nvmftestinit 00:29:09.087 21:30:03 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:29:09.087 21:30:03 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:09.087 21:30:03 -- nvmf/common.sh@437 -- # prepare_net_devs 00:29:09.087 21:30:03 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:29:09.087 21:30:03 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:29:09.087 21:30:03 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:09.087 21:30:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:09.087 21:30:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:09.087 21:30:03 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:29:09.087 21:30:03 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:29:09.087 21:30:03 -- nvmf/common.sh@285 -- # xtrace_disable 00:29:09.087 21:30:03 -- common/autotest_common.sh@10 -- # set +x 00:29:14.354 21:30:08 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:14.354 21:30:08 -- nvmf/common.sh@291 -- # pci_devs=() 00:29:14.354 21:30:08 -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:14.354 21:30:08 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:14.354 21:30:08 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:14.354 21:30:08 -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:14.354 21:30:08 -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:29:14.354 21:30:08 -- nvmf/common.sh@295 -- # net_devs=() 00:29:14.354 21:30:08 -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:14.354 21:30:08 -- nvmf/common.sh@296 -- # e810=() 00:29:14.354 21:30:08 -- nvmf/common.sh@296 -- # local -ga e810 00:29:14.355 21:30:08 -- nvmf/common.sh@297 -- # x722=() 00:29:14.355 21:30:08 -- nvmf/common.sh@297 -- # local -ga x722 00:29:14.355 21:30:08 -- nvmf/common.sh@298 -- # mlx=() 00:29:14.355 21:30:08 -- nvmf/common.sh@298 -- # local -ga mlx 00:29:14.355 21:30:08 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:14.355 21:30:08 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:14.355 21:30:08 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:14.355 21:30:08 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:14.355 21:30:08 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:14.355 21:30:08 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:14.355 21:30:08 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:14.355 21:30:08 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:14.355 21:30:08 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:14.355 21:30:08 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:14.355 21:30:08 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:14.355 21:30:08 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:14.355 21:30:08 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:14.355 21:30:08 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:29:14.355 21:30:08 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:29:14.355 21:30:08 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:29:14.355 21:30:08 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:14.355 21:30:08 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:14.355 21:30:08 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:29:14.355 Found 0000:27:00.0 (0x8086 - 0x159b) 00:29:14.355 21:30:08 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:14.355 21:30:08 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:14.355 21:30:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:14.355 21:30:08 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:14.355 21:30:08 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:14.355 21:30:08 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:14.355 21:30:08 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:29:14.355 Found 0000:27:00.1 (0x8086 - 0x159b) 00:29:14.355 21:30:08 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:14.355 21:30:08 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:14.355 21:30:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:14.355 21:30:08 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:14.355 21:30:08 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:14.355 21:30:08 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:14.355 21:30:08 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:29:14.355 21:30:08 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:14.355 21:30:08 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:14.355 21:30:08 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:29:14.355 21:30:08 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:14.355 21:30:08 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:29:14.355 Found net devices under 0000:27:00.0: cvl_0_0 00:29:14.355 21:30:08 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:29:14.355 21:30:08 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:14.355 21:30:08 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:14.355 21:30:08 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:29:14.355 21:30:08 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:14.355 21:30:08 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:29:14.355 Found net devices under 0000:27:00.1: cvl_0_1 00:29:14.355 21:30:08 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:29:14.355 21:30:08 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:29:14.355 21:30:08 -- nvmf/common.sh@403 -- # is_hw=yes 00:29:14.355 21:30:08 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:29:14.355 21:30:08 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:29:14.355 21:30:08 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:29:14.355 21:30:08 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:14.355 21:30:08 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:14.355 21:30:08 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:14.355 21:30:08 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:14.355 21:30:08 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:14.355 21:30:08 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:14.355 21:30:08 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:14.355 21:30:08 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:14.355 21:30:08 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:14.355 21:30:08 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:14.355 21:30:08 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:14.355 21:30:08 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:14.355 21:30:08 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:14.355 21:30:08 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:14.355 21:30:08 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:14.355 21:30:08 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:14.355 21:30:08 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:14.355 21:30:08 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:14.355 21:30:08 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:14.355 21:30:08 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:14.355 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:14.355 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.424 ms 00:29:14.355 00:29:14.355 --- 10.0.0.2 ping statistics --- 00:29:14.355 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:14.355 rtt min/avg/max/mdev = 0.424/0.424/0.424/0.000 ms 00:29:14.355 21:30:08 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:14.355 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:14.355 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:29:14.355 00:29:14.355 --- 10.0.0.1 ping statistics --- 00:29:14.355 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:14.355 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:29:14.355 21:30:08 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:14.355 21:30:08 -- nvmf/common.sh@411 -- # return 0 00:29:14.355 21:30:08 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:29:14.355 21:30:08 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:14.355 21:30:08 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:29:14.355 21:30:08 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:29:14.355 21:30:08 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:14.355 21:30:08 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:29:14.355 21:30:08 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:29:14.355 21:30:08 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:29:14.355 21:30:08 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:29:14.355 21:30:08 -- common/autotest_common.sh@710 -- # xtrace_disable 00:29:14.355 21:30:08 -- common/autotest_common.sh@10 -- # set +x 00:29:14.355 21:30:08 -- nvmf/common.sh@470 -- # nvmfpid=1607640 00:29:14.355 21:30:08 -- nvmf/common.sh@471 -- # waitforlisten 1607640 00:29:14.355 21:30:08 -- common/autotest_common.sh@817 -- # '[' -z 1607640 ']' 00:29:14.355 21:30:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:14.355 21:30:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:14.355 21:30:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:14.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:14.355 21:30:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:14.355 21:30:08 -- common/autotest_common.sh@10 -- # set +x 00:29:14.355 21:30:08 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:14.355 [2024-04-23 21:30:08.558231] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 00:29:14.355 [2024-04-23 21:30:08.558335] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:14.614 EAL: No free 2048 kB hugepages reported on node 1 00:29:14.614 [2024-04-23 21:30:08.682610] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:14.614 [2024-04-23 21:30:08.776907] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:14.614 [2024-04-23 21:30:08.776943] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:14.614 [2024-04-23 21:30:08.776953] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:14.614 [2024-04-23 21:30:08.776964] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:14.614 [2024-04-23 21:30:08.776971] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
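Note: for the failover suite the target runs inside the namespace that was just wired up (cvl_0_0 with 10.0.0.2 moved into cvl_0_0_ns_spdk, its peer cvl_0_1 with 10.0.0.1 left in the root namespace, which is what the two pings verified). Core mask 0xE places the reactors on cores 1-3, matching the reactor notices below; the launch reduces to the following, with $SPDK_DIR standing in for the long workspace path:

  ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  waitforlisten "$nvmfpid"    # returns once the app answers on /var/tmp/spdk.sock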
00:29:14.614 [2024-04-23 21:30:08.777109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:14.614 [2024-04-23 21:30:08.777216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:14.614 [2024-04-23 21:30:08.777226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:15.180 21:30:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:15.180 21:30:09 -- common/autotest_common.sh@850 -- # return 0 00:29:15.180 21:30:09 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:29:15.180 21:30:09 -- common/autotest_common.sh@716 -- # xtrace_disable 00:29:15.180 21:30:09 -- common/autotest_common.sh@10 -- # set +x 00:29:15.180 21:30:09 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:15.180 21:30:09 -- host/failover.sh@22 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:15.180 [2024-04-23 21:30:09.397540] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:15.180 21:30:09 -- host/failover.sh@23 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:29:15.438 Malloc0 00:29:15.438 21:30:09 -- host/failover.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:15.696 21:30:09 -- host/failover.sh@25 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:15.696 21:30:09 -- host/failover.sh@26 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:15.954 [2024-04-23 21:30:09.985802] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:15.954 21:30:10 -- host/failover.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:15.954 [2024-04-23 21:30:10.138004] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:15.954 21:30:10 -- host/failover.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:29:16.215 [2024-04-23 21:30:10.282079] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:29:16.215 21:30:10 -- host/failover.sh@31 -- # bdevperf_pid=1608093 00:29:16.215 21:30:10 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:16.215 21:30:10 -- host/failover.sh@30 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:29:16.215 21:30:10 -- host/failover.sh@34 -- # waitforlisten 1608093 /var/tmp/bdevperf.sock 00:29:16.215 21:30:10 -- common/autotest_common.sh@817 -- # '[' -z 1608093 ']' 00:29:16.215 21:30:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:16.215 21:30:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:16.215 21:30:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:29:16.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:16.215 21:30:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:16.215 21:30:10 -- common/autotest_common.sh@10 -- # set +x 00:29:17.154 21:30:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:17.154 21:30:11 -- common/autotest_common.sh@850 -- # return 0 00:29:17.154 21:30:11 -- host/failover.sh@35 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:17.154 NVMe0n1 00:29:17.154 21:30:11 -- host/failover.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:17.504 00:29:17.504 21:30:11 -- host/failover.sh@39 -- # run_test_pid=1608204 00:29:17.504 21:30:11 -- host/failover.sh@41 -- # sleep 1 00:29:17.504 21:30:11 -- host/failover.sh@38 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:18.468 21:30:12 -- host/failover.sh@43 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:18.468 [2024-04-23 21:30:12.651288] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:29:18.468 [2024-04-23 21:30:12.651364] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:29:18.468 [2024-04-23 21:30:12.651373] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:29:18.468 [2024-04-23 21:30:12.651381] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:29:18.468 [2024-04-23 21:30:12.651388] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:29:18.468 [2024-04-23 21:30:12.651396] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:29:18.468 [2024-04-23 21:30:12.651404] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:29:18.468 [2024-04-23 21:30:12.651412] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:29:18.468 [2024-04-23 21:30:12.651419] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:29:18.468 [2024-04-23 21:30:12.651426] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:29:18.468 [2024-04-23 21:30:12.651440] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:29:18.468 [2024-04-23 21:30:12.651448] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:29:18.468 [2024-04-23 21:30:12.651455] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is 
same with the state(5) to be set
00:29:18.468 [2024-04-23 21:30:12.651463] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set
[... the identical tcp.c:1587 message repeats for tqpair=0x618000002880 through 21:30:12.651902 ...]
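The flood above is a single diagnostic being re-emitted on every attempt: nvmf_tcp_qpair_set_recv_state() in the target's tcp.c complains whenever it is asked to move a qpair into the recv state it is already in. A guess at the shape of the guard that produces the message, inferred from the log text itself rather than copied from the actual tcp.c:1587 source:

    /* sketch only -- struct and enum names are assumptions about SPDK's
     * internal TCP transport types, not a quote of the real function */
    static void
    nvmf_tcp_qpair_set_recv_state(struct spdk_nvmf_tcp_qpair *tqpair,
                                  enum nvme_tcp_pdu_recv_state state)
    {
            if (tqpair->recv_state == state) {
                    /* setting the state the qpair is already in is a no-op,
                     * but it is logged on every attempt -- hence the spam while
                     * the qpair sits in the same state(5) during teardown */
                    SPDK_ERRLOG("The recv state of tqpair=%p is same with the state(%d) to be set\n",
                                tqpair, state);
                    return;
            }
            tqpair->recv_state = state;
            /* per-state bookkeeping elided */
    }

Noisy but benign here: the qpair is being torn down, so every attempt to re-arm receive lands in the same terminal state.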
00:29:18.469 21:30:12 -- host/failover.sh@45 -- # sleep 3
00:29:21.755 21:30:15 -- host/failover.sh@47 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:29:22.014
00:29:22.014 21:30:16 -- host/failover.sh@48 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:29:22.014 [2024-04-23 21:30:16.185122] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set
[... the identical tcp.c:1587 message repeats for tqpair=0x618000003080 through 21:30:16.185220 ...]
00:29:22.014 21:30:16 -- host/failover.sh@50 -- # sleep 3
00:29:25.303 21:30:19 -- host/failover.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:25.303 [2024-04-23 21:30:19.342321] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:25.303 21:30:19 -- host/failover.sh@55 -- # sleep 1
00:29:26.238 21:30:20 -- host/failover.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:29:26.238 [2024-04-23 21:30:20.482834] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set
[... the identical tcp.c:1587 message repeats for tqpair=0x618000003c80 through 21:30:20.483317 ...]
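Condensed, the listener juggling that host/failover.sh is driving in steps @45-@57 above looks like this (commands are verbatim from the trace with paths shortened; the comments are an interpretation of intent, not part of the script):

    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1                              # @47: register a second path on port 4422
    scripts/rpc.py nvmf_subsystem_remove_listener \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421      # @48: yank the port the active path uses
    sleep 3                                                        # @50: give bdev_nvme time to fail over to 4422
    scripts/rpc.py nvmf_subsystem_add_listener \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420      # @53: bring the original listener back
    sleep 1                                                        # @55
    scripts/rpc.py nvmf_subsystem_remove_listener \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422      # @57: force a fail-back off 4422

Each remove_listener is immediately followed by a burst of the tcp.c:1587 message for the qpair that just lost its listener, which is consistent with this reading.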
00:29:26.238 21:30:20 -- host/failover.sh@59 -- # wait 1608204
00:29:32.826 0
00:29:32.826 21:30:26 -- host/failover.sh@61 -- # killprocess 1608093
00:29:32.826 21:30:26 -- common/autotest_common.sh@936 -- # '[' -z 1608093 ']'
00:29:32.826 21:30:26 -- common/autotest_common.sh@940 -- # kill -0 1608093
00:29:32.826 21:30:26 -- common/autotest_common.sh@941 -- # uname
00:29:32.826 21:30:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:29:32.826 21:30:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1608093
00:29:32.826 21:30:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:29:32.826 21:30:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:29:32.826 21:30:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1608093'
00:29:32.826 killing process with pid 1608093
00:29:32.826 21:30:26 -- common/autotest_common.sh@955 -- # kill 1608093
00:29:32.826 21:30:27 -- common/autotest_common.sh@960 -- # wait 1608093
00:29:32.826 21:30:27 -- host/failover.sh@63 -- # cat /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt
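Before the try.txt dump begins: the killprocess helper traced at @936-@960 above can be read back out of the xtrace. A reconstruction, assuming each traced line maps one-to-one to a statement (the real definition lives in test/common/autotest_common.sh):

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                            # @936: no pid, nothing to kill
        kill -0 "$pid" || return                             # @940: check the process is still alive
        if [ "$(uname)" = Linux ]; then                      # @941
            process_name=$(ps --no-headers -o comm= "$pid")  # @942: resolves to reactor_0 here
        fi
        if [ "$process_name" = sudo ]; then                  # @946: would target the sudo child (branch not taken)
            :
        fi
        echo "killing process with pid $pid"                 # @954
        kill "$pid"                                          # @955
        wait "$pid"                                          # @960: reap it so failures surface in the exit code
    }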
[2024-04-23 21:30:10.388051] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization...
00:29:32.826 [2024-04-23 21:30:10.388214] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1608093 ]
00:29:32.826 EAL: No free 2048 kB hugepages reported on node 1
00:29:32.826 [2024-04-23 21:30:10.520853] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:32.826 [2024-04-23 21:30:10.610889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:29:32.826 Running I/O for 15 seconds...
00:29:32.826 [2024-04-23 21:30:12.652384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:102712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:32.826 [2024-04-23 21:30:12.652437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same print_command / ABORTED - SQ DELETION (00/08) pair repeats for every remaining in-flight READ and WRITE, lba 102720 through 103728, len:8, through 21:30:12.654843 ...]
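Every pair above is one in-flight command being failed back with ABORTED - SQ DELETION, generic status (00/08): the submission queue vanished when the qpair disconnected, which is what a path failover is expected to look like from the initiator side. A minimal sketch of how a completion callback can distinguish this from a real media error (the callback name and context are hypothetical; the status constants and helper are SPDK's public API):

    #include "spdk/nvme.h"

    /* hypothetical bdevperf-style completion callback, classifying the
     * (00/08) completions seen in the dump above */
    static void
    io_complete(void *ctx, const struct spdk_nvme_cpl *cpl)
    {
            if (spdk_nvme_cpl_is_error(cpl) &&
                cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
                cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
                    /* the qpair was torn down under us -- resubmit on another
                     * path rather than reporting an I/O failure */
                    return;
            }
            /* handle success and other error classes here */
    }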
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.829 [2024-04-23 21:30:12.654714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.829 [2024-04-23 21:30:12.654724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:103728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.829 [2024-04-23 21:30:12.654731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.829 [2024-04-23 21:30:12.654742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:103320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.829 [2024-04-23 21:30:12.654751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.829 [2024-04-23 21:30:12.654761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:103328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.829 [2024-04-23 21:30:12.654769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.829 [2024-04-23 21:30:12.654778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:103336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.829 [2024-04-23 21:30:12.654786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.829 [2024-04-23 21:30:12.654799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:103344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.829 [2024-04-23 21:30:12.654807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.829 [2024-04-23 21:30:12.654816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:103352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.829 [2024-04-23 21:30:12.654824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.829 [2024-04-23 21:30:12.654834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:103360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.829 [2024-04-23 21:30:12.654843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.829 [2024-04-23 21:30:12.654853] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000007240 is same with the state(5) to be set 00:29:32.829 [2024-04-23 21:30:12.654867] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.829 [2024-04-23 21:30:12.654877] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.829 [2024-04-23 21:30:12.654887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:103368 len:8 PRP1 0x0 PRP2 0x0 00:29:32.829 [2024-04-23 21:30:12.654898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.829 [2024-04-23 21:30:12.655027] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 
0x614000007240 was disconnected and freed. reset controller.
00:29:32.829 [2024-04-23 21:30:12.655044] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:29:32.829 [2024-04-23 21:30:12.655079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:29:32.829 [2024-04-23 21:30:12.655091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:32.829 [2024-04-23 21:30:12.655103] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:29:32.829 [2024-04-23 21:30:12.655113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:32.829 [2024-04-23 21:30:12.655122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:29:32.829 [2024-04-23 21:30:12.655131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:32.829 [2024-04-23 21:30:12.655140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:29:32.829 [2024-04-23 21:30:12.655148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:32.829 [2024-04-23 21:30:12.655166] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:32.829 [2024-04-23 21:30:12.657723] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:32.829 [2024-04-23 21:30:12.657755] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004a40 (9): Bad file descriptor
00:29:32.829 [2024-04-23 21:30:12.727623] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
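The block above is one complete failover cycle: the TCP qpair to 10.0.0.2:4420 is disconnected and freed, every command still queued on it is completed manually with ABORTED - SQ DELETION (00/08), bdev_nvme fails the controller over to 10.0.0.2:4421, and the controller reset completes. A minimal sketch of how such a scenario is typically driven through SPDK's scripts/rpc.py follows; the controller name NVMe0 and the Malloc0 backing bdev are illustrative assumptions, the NQN, address, and ports mirror the log, and exact flags can vary between SPDK versions:

  # Target side: export one subsystem on two TCP listeners (ports as in the log).
  rpc.py nvmf_create_transport -t tcp
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  # Host side: attach through the first path, then register the second path
  # under the same controller name so it can act as the failover target.
  rpc.py bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  rpc.py bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # Removing the active listener drops the 4420 connection and provokes the
  # abort/failover/reset sequence logged above.
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Note that target and host would each be a separate SPDK instance, addressed via rpc.py's -s <socket> option, omitted here for brevity.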
00:29:32.829 [2024-04-23 21:30:16.186850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:62248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.829 [2024-04-23 21:30:16.186901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.829 [2024-04-23 21:30:16.186931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:62256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.830 [2024-04-23 21:30:16.186941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.830 [2024-04-23 21:30:16.186952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:62264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.830 [2024-04-23 21:30:16.186960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.830 [2024-04-23 21:30:16.186970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:62272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.830 [2024-04-23 21:30:16.186978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.830 [2024-04-23 21:30:16.186988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:62280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.830 [2024-04-23 21:30:16.186996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.830 [2024-04-23 21:30:16.187006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:62288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.830 [2024-04-23 21:30:16.187014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.830 [2024-04-23 21:30:16.187035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:62296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.830 [2024-04-23 21:30:16.187043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.830 [2024-04-23 21:30:16.187053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:62304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.830 [2024-04-23 21:30:16.187061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.830 [2024-04-23 21:30:16.187071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:62312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.830 [2024-04-23 21:30:16.187079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.830 [2024-04-23 21:30:16.187089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:62320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.830 [2024-04-23 21:30:16.187097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.830 [2024-04-23 21:30:16.187107] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:62328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.830 [2024-04-23 21:30:16.187114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.830 [2024-04-23 21:30:16.187124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:62336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.830 [2024-04-23 21:30:16.187131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.830 [2024-04-23 21:30:16.187141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:62344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.830 [2024-04-23 21:30:16.187148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.830 [2024-04-23 21:30:16.187158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:62352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.830 [2024-04-23 21:30:16.187170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.830 [2024-04-23 21:30:16.187181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:62416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.830 [2024-04-23 21:30:16.187189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.830 [2024-04-23 21:30:16.187198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:62424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.830 [2024-04-23 21:30:16.187205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.830 [2024-04-23 21:30:16.187215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:62432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.830 [2024-04-23 21:30:16.187223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.830 [2024-04-23 21:30:16.187233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:62440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.830 [2024-04-23 21:30:16.187241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.830 [2024-04-23 21:30:16.187250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:62448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.830 [2024-04-23 21:30:16.187258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.830 [2024-04-23 21:30:16.187267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:62456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.830 [2024-04-23 21:30:16.187275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.830 [2024-04-23 21:30:16.187284] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:62464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.830 [2024-04-23 21:30:16.187291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.830 [2024-04-23 21:30:16.187301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:62472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.830 [2024-04-23 21:30:16.187309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.830 [2024-04-23 21:30:16.187318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:62480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.830 [2024-04-23 21:30:16.187326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.830 [2024-04-23 21:30:16.187334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:62488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.830 [2024-04-23 21:30:16.187342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.830 [2024-04-23 21:30:16.187351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:62496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.830 [2024-04-23 21:30:16.187359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.830 [2024-04-23 21:30:16.187368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:62504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.830 [2024-04-23 21:30:16.187376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.830 [2024-04-23 21:30:16.187385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:62512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.830 [2024-04-23 21:30:16.187394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.830 [2024-04-23 21:30:16.187403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:62520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.830 [2024-04-23 21:30:16.187411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.830 [2024-04-23 21:30:16.187420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:62528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.830 [2024-04-23 21:30:16.187427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.830 [2024-04-23 21:30:16.187436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:62536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.830 [2024-04-23 21:30:16.187444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.830 [2024-04-23 21:30:16.187453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:97 nsid:1 lba:62544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.830 [2024-04-23 21:30:16.187460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.830 [2024-04-23 21:30:16.187469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:62552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.830 [2024-04-23 21:30:16.187477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.830 [2024-04-23 21:30:16.187486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.830 [2024-04-23 21:30:16.187494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.830 [2024-04-23 21:30:16.187503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:62568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.830 [2024-04-23 21:30:16.187510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.831 [2024-04-23 21:30:16.187520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:62576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.831 [2024-04-23 21:30:16.187528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.831 [2024-04-23 21:30:16.187537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:62584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.831 [2024-04-23 21:30:16.187545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.831 [2024-04-23 21:30:16.187555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:62592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.831 [2024-04-23 21:30:16.187563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.831 [2024-04-23 21:30:16.187573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:62600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.831 [2024-04-23 21:30:16.187580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.831 [2024-04-23 21:30:16.187590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:62608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.831 [2024-04-23 21:30:16.187598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.831 [2024-04-23 21:30:16.187608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:62616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.831 [2024-04-23 21:30:16.187616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.831 [2024-04-23 21:30:16.187626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:62624 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:29:32.831 [2024-04-23 21:30:16.187639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.831 [2024-04-23 21:30:16.187648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:62632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.831 [2024-04-23 21:30:16.187659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.831 [2024-04-23 21:30:16.187668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:62640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.831 [2024-04-23 21:30:16.187676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.831 [2024-04-23 21:30:16.187685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:62648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.831 [2024-04-23 21:30:16.187693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.831 [2024-04-23 21:30:16.187703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:62656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.831 [2024-04-23 21:30:16.187710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.831 [2024-04-23 21:30:16.187720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:62664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.831 [2024-04-23 21:30:16.187728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.831 [2024-04-23 21:30:16.187737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:62672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.831 [2024-04-23 21:30:16.187745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.831 [2024-04-23 21:30:16.187754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:62680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.831 [2024-04-23 21:30:16.187762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.831 [2024-04-23 21:30:16.187771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:62688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.831 [2024-04-23 21:30:16.187779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.831 [2024-04-23 21:30:16.187788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:62696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.831 [2024-04-23 21:30:16.187796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.831 [2024-04-23 21:30:16.187805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:62704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.831 [2024-04-23 
21:30:16.187813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.831 [2024-04-23 21:30:16.187822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:62712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.831 [2024-04-23 21:30:16.187831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.831 [2024-04-23 21:30:16.187840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:62720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.831 [2024-04-23 21:30:16.187847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.831 [2024-04-23 21:30:16.187856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:62728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.831 [2024-04-23 21:30:16.187864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.831 [2024-04-23 21:30:16.187873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:62736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.831 [2024-04-23 21:30:16.187881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.831 [2024-04-23 21:30:16.187890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:62744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.831 [2024-04-23 21:30:16.187898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.831 [2024-04-23 21:30:16.187907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:62752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.831 [2024-04-23 21:30:16.187915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.831 [2024-04-23 21:30:16.187923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:62760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.831 [2024-04-23 21:30:16.187932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.831 [2024-04-23 21:30:16.187941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:62768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.831 [2024-04-23 21:30:16.187949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.831 [2024-04-23 21:30:16.187958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:62776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.831 [2024-04-23 21:30:16.187966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.831 [2024-04-23 21:30:16.187976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:62784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.831 [2024-04-23 21:30:16.187984] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.831 [2024-04-23 21:30:16.187993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:62792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.831 [2024-04-23 21:30:16.188000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.831 [2024-04-23 21:30:16.188009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:62800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.831 [2024-04-23 21:30:16.188017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.831 [2024-04-23 21:30:16.188027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:62808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.831 [2024-04-23 21:30:16.188035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.831 [2024-04-23 21:30:16.188045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:62816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.831 [2024-04-23 21:30:16.188053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.831 [2024-04-23 21:30:16.188062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:62824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.831 [2024-04-23 21:30:16.188070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.831 [2024-04-23 21:30:16.188079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:62832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.831 [2024-04-23 21:30:16.188086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.831 [2024-04-23 21:30:16.188095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:62840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.831 [2024-04-23 21:30:16.188103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.831 [2024-04-23 21:30:16.188112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:62848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.831 [2024-04-23 21:30:16.188120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.831 [2024-04-23 21:30:16.188129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:62856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.831 [2024-04-23 21:30:16.188137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.831 [2024-04-23 21:30:16.188150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:62864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.831 [2024-04-23 21:30:16.188158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.831 [2024-04-23 21:30:16.188168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:62872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.831 [2024-04-23 21:30:16.188175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.832 [2024-04-23 21:30:16.188185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:62880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.832 [2024-04-23 21:30:16.188192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.832 [2024-04-23 21:30:16.188202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:62888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.832 [2024-04-23 21:30:16.188210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.832 [2024-04-23 21:30:16.188219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:62896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.832 [2024-04-23 21:30:16.188226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.832 [2024-04-23 21:30:16.188235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:62904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.832 [2024-04-23 21:30:16.188243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.832 [2024-04-23 21:30:16.188252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:62912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.832 [2024-04-23 21:30:16.188260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.832 [2024-04-23 21:30:16.188270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:62920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.832 [2024-04-23 21:30:16.188278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.832 [2024-04-23 21:30:16.188287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:62928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.832 [2024-04-23 21:30:16.188295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.832 [2024-04-23 21:30:16.188304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:62936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.832 [2024-04-23 21:30:16.188312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.832 [2024-04-23 21:30:16.188322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:62944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.832 [2024-04-23 21:30:16.188330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:29:32.832 [2024-04-23 21:30:16.188341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:62952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.832 [2024-04-23 21:30:16.188349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.832 [2024-04-23 21:30:16.188358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:62960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.832 [2024-04-23 21:30:16.188365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.832 [2024-04-23 21:30:16.188375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:62968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.832 [2024-04-23 21:30:16.188383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.832 [2024-04-23 21:30:16.188392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:62976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.832 [2024-04-23 21:30:16.188400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.832 [2024-04-23 21:30:16.188409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:62984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.832 [2024-04-23 21:30:16.188417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.832 [2024-04-23 21:30:16.188426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:62992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.832 [2024-04-23 21:30:16.188434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.832 [2024-04-23 21:30:16.188443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:63000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.832 [2024-04-23 21:30:16.188451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.832 [2024-04-23 21:30:16.188460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:63008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.832 [2024-04-23 21:30:16.188467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.832 [2024-04-23 21:30:16.188477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:63016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.832 [2024-04-23 21:30:16.188485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.832 [2024-04-23 21:30:16.188495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:63024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.832 [2024-04-23 21:30:16.188502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.832 [2024-04-23 21:30:16.188511] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:63032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.832 [2024-04-23 21:30:16.188519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.832 [2024-04-23 21:30:16.188528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:63040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.832 [2024-04-23 21:30:16.188536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.832 [2024-04-23 21:30:16.188545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:63048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:32.832 [2024-04-23 21:30:16.188553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.832 [2024-04-23 21:30:16.188579] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.832 [2024-04-23 21:30:16.188589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63056 len:8 PRP1 0x0 PRP2 0x0 00:29:32.832 [2024-04-23 21:30:16.188598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.832 [2024-04-23 21:30:16.188615] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.832 [2024-04-23 21:30:16.188623] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.832 [2024-04-23 21:30:16.188636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63064 len:8 PRP1 0x0 PRP2 0x0 00:29:32.832 [2024-04-23 21:30:16.188644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.832 [2024-04-23 21:30:16.188653] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.832 [2024-04-23 21:30:16.188659] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.832 [2024-04-23 21:30:16.188666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63072 len:8 PRP1 0x0 PRP2 0x0 00:29:32.832 [2024-04-23 21:30:16.188674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.832 [2024-04-23 21:30:16.188681] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.832 [2024-04-23 21:30:16.188687] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.832 [2024-04-23 21:30:16.188695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63080 len:8 PRP1 0x0 PRP2 0x0 00:29:32.832 [2024-04-23 21:30:16.188703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.832 [2024-04-23 21:30:16.188711] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.832 [2024-04-23 21:30:16.188717] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.832 [2024-04-23 21:30:16.188725] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63088 len:8 PRP1 0x0 PRP2 0x0 00:29:32.832 [2024-04-23 21:30:16.188733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.832 [2024-04-23 21:30:16.188742] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.832 [2024-04-23 21:30:16.188748] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.832 [2024-04-23 21:30:16.188755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63096 len:8 PRP1 0x0 PRP2 0x0 00:29:32.832 [2024-04-23 21:30:16.188763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.832 [2024-04-23 21:30:16.188771] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.832 [2024-04-23 21:30:16.188777] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.832 [2024-04-23 21:30:16.188784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63104 len:8 PRP1 0x0 PRP2 0x0 00:29:32.832 [2024-04-23 21:30:16.188792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.832 [2024-04-23 21:30:16.188799] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.832 [2024-04-23 21:30:16.188805] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.832 [2024-04-23 21:30:16.188812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63112 len:8 PRP1 0x0 PRP2 0x0 00:29:32.832 [2024-04-23 21:30:16.188820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.832 [2024-04-23 21:30:16.188828] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.832 [2024-04-23 21:30:16.188835] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.832 [2024-04-23 21:30:16.188841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63120 len:8 PRP1 0x0 PRP2 0x0 00:29:32.832 [2024-04-23 21:30:16.188849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.832 [2024-04-23 21:30:16.188857] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.832 [2024-04-23 21:30:16.188863] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.832 [2024-04-23 21:30:16.188870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63128 len:8 PRP1 0x0 PRP2 0x0 00:29:32.832 [2024-04-23 21:30:16.188878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.832 [2024-04-23 21:30:16.188885] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.832 [2024-04-23 21:30:16.188891] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.832 [2024-04-23 21:30:16.188898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:63136 len:8 PRP1 0x0 PRP2 0x0 00:29:32.833 [2024-04-23 21:30:16.188905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.833 [2024-04-23 21:30:16.188913] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.833 [2024-04-23 21:30:16.188920] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.833 [2024-04-23 21:30:16.188926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63144 len:8 PRP1 0x0 PRP2 0x0 00:29:32.833 [2024-04-23 21:30:16.188934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.833 [2024-04-23 21:30:16.188941] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.833 [2024-04-23 21:30:16.188947] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.833 [2024-04-23 21:30:16.188954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63152 len:8 PRP1 0x0 PRP2 0x0 00:29:32.833 [2024-04-23 21:30:16.188962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.833 [2024-04-23 21:30:16.188970] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.833 [2024-04-23 21:30:16.188976] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.833 [2024-04-23 21:30:16.188983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63160 len:8 PRP1 0x0 PRP2 0x0 00:29:32.833 [2024-04-23 21:30:16.188991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.833 [2024-04-23 21:30:16.188998] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.833 [2024-04-23 21:30:16.189004] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.833 [2024-04-23 21:30:16.189011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63168 len:8 PRP1 0x0 PRP2 0x0 00:29:32.833 [2024-04-23 21:30:16.189018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.833 [2024-04-23 21:30:16.189025] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.833 [2024-04-23 21:30:16.189031] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.833 [2024-04-23 21:30:16.189038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63176 len:8 PRP1 0x0 PRP2 0x0 00:29:32.833 [2024-04-23 21:30:16.189045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.833 [2024-04-23 21:30:16.189053] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.833 [2024-04-23 21:30:16.189060] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.833 [2024-04-23 21:30:16.189067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63184 len:8 PRP1 0x0 PRP2 0x0 
00:29:32.833 [2024-04-23 21:30:16.189075 - 21:30:16.189594] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs / 558:nvme_qpair_manual_complete_request / 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: repeated per queued command: aborting queued i/o; Command completed manually: WRITE sqid:1 cid:0 nsid:1 lba:63192-63264 (step 8) len:8 PRP1 0x0 PRP2 0x0 and READ sqid:1 cid:0 nsid:1 lba:62360-62408 (step 8) len:8 PRP1 0x0 PRP2 0x0; each completion: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:32.833 [2024-04-23 21:30:16.189714] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x614000008040 was disconnected and freed. reset controller.
00:29:32.834 [2024-04-23 21:30:16.189728] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:29:32.834 [2024-04-23 21:30:16.189755 - 21:30:16.189816] nvme_qpair.c: 223:nvme_admin_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3/2/1/0 nsid:0 cdw10:00000000 cdw11:00000000, each ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:32.834 [2024-04-23 21:30:16.189825] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:32.834 [2024-04-23 21:30:16.189860] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004a40 (9): Bad file descriptor
00:29:32.834 [2024-04-23 21:30:16.192338] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:32.834 [2024-04-23 21:30:16.263983] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:29:32.834 [2024-04-23 21:30:20.484454 - 21:30:20.486053] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: outstanding commands aborted on qpair deletion: READ sqid:1 nsid:1 lba:88632-88832 (step 8, various cids) len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 and WRITE sqid:1 nsid:1 lba:88968-89464 (step 8, various cids) len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000; each completion: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:32.836 [2024-04-23 21:30:20.486079] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: WRITE sqid:1 cid:0 nsid:1 lba:89472 len:8 PRP1 0x0 PRP2 0x0; ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:32.836 [2024-04-23 21:30:20.486148 - 21:30:20.486217] nvme_qpair.c: 223:nvme_admin_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0/1/2/3 nsid:0 cdw10:00000000 cdw11:00000000, each ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:32.836 [2024-04-23 21:30:20.486225] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000004a40 is same with the state(5) to be set
00:29:32.836 [2024-04-23 21:30:20.486386 - 21:30:20.487582] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs / 558:nvme_qpair_manual_complete_request / 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: repeated per queued command: aborting queued i/o; Command completed manually: WRITE sqid:1 cid:0 nsid:1 lba:89480-89648 (step 8) len:8 PRP1 0x0 PRP2 0x0 and READ sqid:1 cid:0 nsid:1 lba:88840-88960, 88632-88640 (step 8) len:8 PRP1 0x0 PRP2 0x0; each completion: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
21:30:20.487590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88648 len:8 PRP1 0x0 PRP2 0x0 00:29:32.838 [2024-04-23 21:30:20.487598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.838 [2024-04-23 21:30:20.487606] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.838 [2024-04-23 21:30:20.487612] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.838 [2024-04-23 21:30:20.487619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88656 len:8 PRP1 0x0 PRP2 0x0 00:29:32.838 [2024-04-23 21:30:20.487636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.838 [2024-04-23 21:30:20.487644] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.838 [2024-04-23 21:30:20.487650] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.838 [2024-04-23 21:30:20.487657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88664 len:8 PRP1 0x0 PRP2 0x0 00:29:32.838 [2024-04-23 21:30:20.487665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.838 [2024-04-23 21:30:20.487673] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.838 [2024-04-23 21:30:20.487679] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.838 [2024-04-23 21:30:20.487686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88672 len:8 PRP1 0x0 PRP2 0x0 00:29:32.838 [2024-04-23 21:30:20.487694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.838 [2024-04-23 21:30:20.487702] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.838 [2024-04-23 21:30:20.487708] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.838 [2024-04-23 21:30:20.487714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88680 len:8 PRP1 0x0 PRP2 0x0 00:29:32.838 [2024-04-23 21:30:20.487722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.838 [2024-04-23 21:30:20.487730] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.838 [2024-04-23 21:30:20.487736] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.838 [2024-04-23 21:30:20.487743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88688 len:8 PRP1 0x0 PRP2 0x0 00:29:32.838 [2024-04-23 21:30:20.487751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.838 [2024-04-23 21:30:20.487759] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.838 [2024-04-23 21:30:20.491412] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.838 [2024-04-23 21:30:20.491449] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88696 len:8 PRP1 0x0 PRP2 0x0 00:29:32.838 [2024-04-23 21:30:20.491462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.838 [2024-04-23 21:30:20.491478] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.838 [2024-04-23 21:30:20.491486] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.838 [2024-04-23 21:30:20.491495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88704 len:8 PRP1 0x0 PRP2 0x0 00:29:32.838 [2024-04-23 21:30:20.491503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.838 [2024-04-23 21:30:20.491512] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.838 [2024-04-23 21:30:20.491524] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.838 [2024-04-23 21:30:20.491533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88712 len:8 PRP1 0x0 PRP2 0x0 00:29:32.838 [2024-04-23 21:30:20.491541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.838 [2024-04-23 21:30:20.491549] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.838 [2024-04-23 21:30:20.491555] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.838 [2024-04-23 21:30:20.491563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88720 len:8 PRP1 0x0 PRP2 0x0 00:29:32.838 [2024-04-23 21:30:20.491570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.838 [2024-04-23 21:30:20.491579] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.838 [2024-04-23 21:30:20.491585] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.838 [2024-04-23 21:30:20.491592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88728 len:8 PRP1 0x0 PRP2 0x0 00:29:32.838 [2024-04-23 21:30:20.491600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.838 [2024-04-23 21:30:20.491608] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.838 [2024-04-23 21:30:20.491614] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.838 [2024-04-23 21:30:20.491620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88736 len:8 PRP1 0x0 PRP2 0x0 00:29:32.838 [2024-04-23 21:30:20.491638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.838 [2024-04-23 21:30:20.491647] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.838 [2024-04-23 21:30:20.491654] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.838 [2024-04-23 21:30:20.491661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 
nsid:1 lba:88744 len:8 PRP1 0x0 PRP2 0x0 00:29:32.838 [2024-04-23 21:30:20.491669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.838 [2024-04-23 21:30:20.491677] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.838 [2024-04-23 21:30:20.491683] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.838 [2024-04-23 21:30:20.491691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88752 len:8 PRP1 0x0 PRP2 0x0 00:29:32.838 [2024-04-23 21:30:20.491699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.838 [2024-04-23 21:30:20.491707] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.838 [2024-04-23 21:30:20.491712] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.838 [2024-04-23 21:30:20.491719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88760 len:8 PRP1 0x0 PRP2 0x0 00:29:32.839 [2024-04-23 21:30:20.491727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.839 [2024-04-23 21:30:20.491735] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.839 [2024-04-23 21:30:20.491740] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.839 [2024-04-23 21:30:20.491747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88768 len:8 PRP1 0x0 PRP2 0x0 00:29:32.839 [2024-04-23 21:30:20.491755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.839 [2024-04-23 21:30:20.491765] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.839 [2024-04-23 21:30:20.491771] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.839 [2024-04-23 21:30:20.491778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88776 len:8 PRP1 0x0 PRP2 0x0 00:29:32.839 [2024-04-23 21:30:20.491786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.839 [2024-04-23 21:30:20.491794] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.839 [2024-04-23 21:30:20.491800] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.839 [2024-04-23 21:30:20.491807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88784 len:8 PRP1 0x0 PRP2 0x0 00:29:32.839 [2024-04-23 21:30:20.491815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.839 [2024-04-23 21:30:20.491822] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.839 [2024-04-23 21:30:20.491828] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.839 [2024-04-23 21:30:20.491835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88792 len:8 PRP1 0x0 PRP2 0x0 00:29:32.839 
[2024-04-23 21:30:20.491843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.839 [2024-04-23 21:30:20.491851] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.839 [2024-04-23 21:30:20.491857] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.839 [2024-04-23 21:30:20.491863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88800 len:8 PRP1 0x0 PRP2 0x0 00:29:32.839 [2024-04-23 21:30:20.491871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.839 [2024-04-23 21:30:20.491879] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.839 [2024-04-23 21:30:20.491885] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.839 [2024-04-23 21:30:20.491892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88808 len:8 PRP1 0x0 PRP2 0x0 00:29:32.839 [2024-04-23 21:30:20.491900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.839 [2024-04-23 21:30:20.491907] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.839 [2024-04-23 21:30:20.491914] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.839 [2024-04-23 21:30:20.491921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88816 len:8 PRP1 0x0 PRP2 0x0 00:29:32.839 [2024-04-23 21:30:20.491928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.839 [2024-04-23 21:30:20.491936] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.839 [2024-04-23 21:30:20.491942] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.839 [2024-04-23 21:30:20.491949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88968 len:8 PRP1 0x0 PRP2 0x0 00:29:32.839 [2024-04-23 21:30:20.491956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.839 [2024-04-23 21:30:20.491964] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.839 [2024-04-23 21:30:20.491977] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.839 [2024-04-23 21:30:20.491984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88976 len:8 PRP1 0x0 PRP2 0x0 00:29:32.839 [2024-04-23 21:30:20.491993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.839 [2024-04-23 21:30:20.492001] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.839 [2024-04-23 21:30:20.492008] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.839 [2024-04-23 21:30:20.492015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88984 len:8 PRP1 0x0 PRP2 0x0 00:29:32.839 [2024-04-23 21:30:20.492023] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.839 [2024-04-23 21:30:20.492031] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.839 [2024-04-23 21:30:20.492037] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.839 [2024-04-23 21:30:20.492044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88992 len:8 PRP1 0x0 PRP2 0x0 00:29:32.839 [2024-04-23 21:30:20.492052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.839 [2024-04-23 21:30:20.492060] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.839 [2024-04-23 21:30:20.492066] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.839 [2024-04-23 21:30:20.492072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89000 len:8 PRP1 0x0 PRP2 0x0 00:29:32.839 [2024-04-23 21:30:20.492080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.839 [2024-04-23 21:30:20.492088] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.839 [2024-04-23 21:30:20.492094] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.839 [2024-04-23 21:30:20.492101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89008 len:8 PRP1 0x0 PRP2 0x0 00:29:32.839 [2024-04-23 21:30:20.492109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.839 [2024-04-23 21:30:20.492117] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.839 [2024-04-23 21:30:20.492123] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.839 [2024-04-23 21:30:20.492130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89016 len:8 PRP1 0x0 PRP2 0x0 00:29:32.839 [2024-04-23 21:30:20.492138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.839 [2024-04-23 21:30:20.492146] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.839 [2024-04-23 21:30:20.492151] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.839 [2024-04-23 21:30:20.492159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89024 len:8 PRP1 0x0 PRP2 0x0 00:29:32.839 [2024-04-23 21:30:20.492166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.839 [2024-04-23 21:30:20.492174] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.839 [2024-04-23 21:30:20.492180] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.839 [2024-04-23 21:30:20.492187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89032 len:8 PRP1 0x0 PRP2 0x0 00:29:32.839 [2024-04-23 21:30:20.492194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.839 [2024-04-23 21:30:20.492202] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.839 [2024-04-23 21:30:20.492208] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.839 [2024-04-23 21:30:20.492216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89040 len:8 PRP1 0x0 PRP2 0x0 00:29:32.839 [2024-04-23 21:30:20.492224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.839 [2024-04-23 21:30:20.492232] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.839 [2024-04-23 21:30:20.492238] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.839 [2024-04-23 21:30:20.492245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89048 len:8 PRP1 0x0 PRP2 0x0 00:29:32.839 [2024-04-23 21:30:20.492252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.839 [2024-04-23 21:30:20.492260] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.839 [2024-04-23 21:30:20.492266] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.839 [2024-04-23 21:30:20.492272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89056 len:8 PRP1 0x0 PRP2 0x0 00:29:32.839 [2024-04-23 21:30:20.492280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.839 [2024-04-23 21:30:20.492288] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.839 [2024-04-23 21:30:20.492294] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.839 [2024-04-23 21:30:20.492301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89064 len:8 PRP1 0x0 PRP2 0x0 00:29:32.839 [2024-04-23 21:30:20.492309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.840 [2024-04-23 21:30:20.492317] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.840 [2024-04-23 21:30:20.492323] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.840 [2024-04-23 21:30:20.492330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89072 len:8 PRP1 0x0 PRP2 0x0 00:29:32.840 [2024-04-23 21:30:20.492338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.840 [2024-04-23 21:30:20.492345] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.840 [2024-04-23 21:30:20.492352] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.840 [2024-04-23 21:30:20.492359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89080 len:8 PRP1 0x0 PRP2 0x0 00:29:32.840 [2024-04-23 21:30:20.492366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:29:32.840 [2024-04-23 21:30:20.492373] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.840 [2024-04-23 21:30:20.492380] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.840 [2024-04-23 21:30:20.492386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89088 len:8 PRP1 0x0 PRP2 0x0 00:29:32.840 [2024-04-23 21:30:20.492393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.840 [2024-04-23 21:30:20.492401] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.840 [2024-04-23 21:30:20.492407] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.840 [2024-04-23 21:30:20.492414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89096 len:8 PRP1 0x0 PRP2 0x0 00:29:32.840 [2024-04-23 21:30:20.492421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.840 [2024-04-23 21:30:20.492432] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.840 [2024-04-23 21:30:20.492438] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.840 [2024-04-23 21:30:20.492445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89104 len:8 PRP1 0x0 PRP2 0x0 00:29:32.840 [2024-04-23 21:30:20.492452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.840 [2024-04-23 21:30:20.492460] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.840 [2024-04-23 21:30:20.492467] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.840 [2024-04-23 21:30:20.492473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89112 len:8 PRP1 0x0 PRP2 0x0 00:29:32.840 [2024-04-23 21:30:20.492482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.840 [2024-04-23 21:30:20.492490] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.840 [2024-04-23 21:30:20.492496] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.840 [2024-04-23 21:30:20.492503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89120 len:8 PRP1 0x0 PRP2 0x0 00:29:32.840 [2024-04-23 21:30:20.492511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.840 [2024-04-23 21:30:20.492519] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.840 [2024-04-23 21:30:20.492526] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.840 [2024-04-23 21:30:20.492533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89128 len:8 PRP1 0x0 PRP2 0x0 00:29:32.840 [2024-04-23 21:30:20.492541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.840 [2024-04-23 21:30:20.492549] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.840 [2024-04-23 21:30:20.492555] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.840 [2024-04-23 21:30:20.492563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89136 len:8 PRP1 0x0 PRP2 0x0 00:29:32.840 [2024-04-23 21:30:20.492570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.840 [2024-04-23 21:30:20.492578] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.840 [2024-04-23 21:30:20.492585] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.840 [2024-04-23 21:30:20.492591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89144 len:8 PRP1 0x0 PRP2 0x0 00:29:32.840 [2024-04-23 21:30:20.492599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.840 [2024-04-23 21:30:20.492607] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.840 [2024-04-23 21:30:20.492613] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.840 [2024-04-23 21:30:20.492620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89152 len:8 PRP1 0x0 PRP2 0x0 00:29:32.840 [2024-04-23 21:30:20.492634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.840 [2024-04-23 21:30:20.492642] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.840 [2024-04-23 21:30:20.492649] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.840 [2024-04-23 21:30:20.492656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89160 len:8 PRP1 0x0 PRP2 0x0 00:29:32.840 [2024-04-23 21:30:20.492665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.840 [2024-04-23 21:30:20.492673] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.840 [2024-04-23 21:30:20.492680] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.840 [2024-04-23 21:30:20.492687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89168 len:8 PRP1 0x0 PRP2 0x0 00:29:32.840 [2024-04-23 21:30:20.492695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.840 [2024-04-23 21:30:20.492703] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.840 [2024-04-23 21:30:20.492709] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.840 [2024-04-23 21:30:20.492716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89176 len:8 PRP1 0x0 PRP2 0x0 00:29:32.840 [2024-04-23 21:30:20.492724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.840 [2024-04-23 21:30:20.492731] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:29:32.840 [2024-04-23 21:30:20.492736] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.840 [2024-04-23 21:30:20.492743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89184 len:8 PRP1 0x0 PRP2 0x0 00:29:32.840 [2024-04-23 21:30:20.492750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.840 [2024-04-23 21:30:20.492758] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.840 [2024-04-23 21:30:20.492764] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.840 [2024-04-23 21:30:20.492771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89192 len:8 PRP1 0x0 PRP2 0x0 00:29:32.840 [2024-04-23 21:30:20.492779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.840 [2024-04-23 21:30:20.492787] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.840 [2024-04-23 21:30:20.492793] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.840 [2024-04-23 21:30:20.492799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89200 len:8 PRP1 0x0 PRP2 0x0 00:29:32.840 [2024-04-23 21:30:20.492807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.840 [2024-04-23 21:30:20.492815] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.840 [2024-04-23 21:30:20.492821] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.840 [2024-04-23 21:30:20.492829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89208 len:8 PRP1 0x0 PRP2 0x0 00:29:32.840 [2024-04-23 21:30:20.492838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.840 [2024-04-23 21:30:20.492845] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.840 [2024-04-23 21:30:20.492852] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.840 [2024-04-23 21:30:20.492859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89216 len:8 PRP1 0x0 PRP2 0x0 00:29:32.840 [2024-04-23 21:30:20.492867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.840 [2024-04-23 21:30:20.492874] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.840 [2024-04-23 21:30:20.492881] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.840 [2024-04-23 21:30:20.492889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89224 len:8 PRP1 0x0 PRP2 0x0 00:29:32.840 [2024-04-23 21:30:20.492897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.840 [2024-04-23 21:30:20.492905] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.840 [2024-04-23 
21:30:20.492914] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.840 [2024-04-23 21:30:20.492922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89232 len:8 PRP1 0x0 PRP2 0x0 00:29:32.840 [2024-04-23 21:30:20.492929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.840 [2024-04-23 21:30:20.492937] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.840 [2024-04-23 21:30:20.492944] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.840 [2024-04-23 21:30:20.492951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89240 len:8 PRP1 0x0 PRP2 0x0 00:29:32.840 [2024-04-23 21:30:20.492960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.840 [2024-04-23 21:30:20.492967] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.840 [2024-04-23 21:30:20.492973] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.840 [2024-04-23 21:30:20.492980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89248 len:8 PRP1 0x0 PRP2 0x0 00:29:32.840 [2024-04-23 21:30:20.492987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.840 [2024-04-23 21:30:20.492995] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.840 [2024-04-23 21:30:20.493001] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.841 [2024-04-23 21:30:20.493008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89256 len:8 PRP1 0x0 PRP2 0x0 00:29:32.841 [2024-04-23 21:30:20.493015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.841 [2024-04-23 21:30:20.493023] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.841 [2024-04-23 21:30:20.493029] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.841 [2024-04-23 21:30:20.493036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89264 len:8 PRP1 0x0 PRP2 0x0 00:29:32.841 [2024-04-23 21:30:20.493044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.841 [2024-04-23 21:30:20.493052] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.841 [2024-04-23 21:30:20.493058] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.841 [2024-04-23 21:30:20.493065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89272 len:8 PRP1 0x0 PRP2 0x0 00:29:32.841 [2024-04-23 21:30:20.493073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.841 [2024-04-23 21:30:20.493081] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.841 [2024-04-23 21:30:20.493087] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.841 [2024-04-23 21:30:20.493093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89280 len:8 PRP1 0x0 PRP2 0x0 00:29:32.841 [2024-04-23 21:30:20.493101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.841 [2024-04-23 21:30:20.493109] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.841 [2024-04-23 21:30:20.493116] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.841 [2024-04-23 21:30:20.493123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89288 len:8 PRP1 0x0 PRP2 0x0 00:29:32.841 [2024-04-23 21:30:20.493131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.841 [2024-04-23 21:30:20.493138] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.841 [2024-04-23 21:30:20.493144] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.841 [2024-04-23 21:30:20.493151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89296 len:8 PRP1 0x0 PRP2 0x0 00:29:32.841 [2024-04-23 21:30:20.493159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.841 [2024-04-23 21:30:20.493167] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.841 [2024-04-23 21:30:20.493173] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.841 [2024-04-23 21:30:20.493180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89304 len:8 PRP1 0x0 PRP2 0x0 00:29:32.841 [2024-04-23 21:30:20.493188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.841 [2024-04-23 21:30:20.493195] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.841 [2024-04-23 21:30:20.493201] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.841 [2024-04-23 21:30:20.493208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89312 len:8 PRP1 0x0 PRP2 0x0 00:29:32.841 [2024-04-23 21:30:20.493215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.841 [2024-04-23 21:30:20.493223] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.841 [2024-04-23 21:30:20.493229] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.841 [2024-04-23 21:30:20.493236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89320 len:8 PRP1 0x0 PRP2 0x0 00:29:32.841 [2024-04-23 21:30:20.493243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.841 [2024-04-23 21:30:20.493251] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.841 [2024-04-23 21:30:20.493256] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:29:32.841 [2024-04-23 21:30:20.493263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89328 len:8 PRP1 0x0 PRP2 0x0 00:29:32.841 [2024-04-23 21:30:20.493270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.841 [2024-04-23 21:30:20.493278] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.841 [2024-04-23 21:30:20.493284] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.841 [2024-04-23 21:30:20.493291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89336 len:8 PRP1 0x0 PRP2 0x0 00:29:32.841 [2024-04-23 21:30:20.493299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.841 [2024-04-23 21:30:20.493307] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.841 [2024-04-23 21:30:20.493313] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.841 [2024-04-23 21:30:20.493320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89344 len:8 PRP1 0x0 PRP2 0x0 00:29:32.841 [2024-04-23 21:30:20.493328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.841 [2024-04-23 21:30:20.493336] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.841 [2024-04-23 21:30:20.493343] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.841 [2024-04-23 21:30:20.493349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89352 len:8 PRP1 0x0 PRP2 0x0 00:29:32.841 [2024-04-23 21:30:20.493356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.841 [2024-04-23 21:30:20.493364] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.841 [2024-04-23 21:30:20.493370] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.841 [2024-04-23 21:30:20.493377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89360 len:8 PRP1 0x0 PRP2 0x0 00:29:32.841 [2024-04-23 21:30:20.493384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.841 [2024-04-23 21:30:20.493392] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.841 [2024-04-23 21:30:20.493398] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.841 [2024-04-23 21:30:20.493405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89368 len:8 PRP1 0x0 PRP2 0x0 00:29:32.841 [2024-04-23 21:30:20.493412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.841 [2024-04-23 21:30:20.493420] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.841 [2024-04-23 21:30:20.493426] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.841 [2024-04-23 
21:30:20.493433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89376 len:8 PRP1 0x0 PRP2 0x0 00:29:32.841 [2024-04-23 21:30:20.493440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.841 [2024-04-23 21:30:20.493448] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.841 [2024-04-23 21:30:20.493454] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.841 [2024-04-23 21:30:20.493461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89384 len:8 PRP1 0x0 PRP2 0x0 00:29:32.841 [2024-04-23 21:30:20.493469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.841 [2024-04-23 21:30:20.493476] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.841 [2024-04-23 21:30:20.493482] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.841 [2024-04-23 21:30:20.493489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89392 len:8 PRP1 0x0 PRP2 0x0 00:29:32.841 [2024-04-23 21:30:20.493496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.841 [2024-04-23 21:30:20.493504] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.841 [2024-04-23 21:30:20.493510] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.841 [2024-04-23 21:30:20.493517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89400 len:8 PRP1 0x0 PRP2 0x0 00:29:32.841 [2024-04-23 21:30:20.493524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.841 [2024-04-23 21:30:20.493532] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.841 [2024-04-23 21:30:20.493538] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.841 [2024-04-23 21:30:20.493544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89408 len:8 PRP1 0x0 PRP2 0x0 00:29:32.841 [2024-04-23 21:30:20.493553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.841 [2024-04-23 21:30:20.493560] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.841 [2024-04-23 21:30:20.493566] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.841 [2024-04-23 21:30:20.493573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89416 len:8 PRP1 0x0 PRP2 0x0 00:29:32.841 [2024-04-23 21:30:20.493581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.841 [2024-04-23 21:30:20.493589] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.841 [2024-04-23 21:30:20.493595] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.841 [2024-04-23 21:30:20.493602] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89424 len:8 PRP1 0x0 PRP2 0x0 00:29:32.841 [2024-04-23 21:30:20.493609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.841 [2024-04-23 21:30:20.493617] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.841 [2024-04-23 21:30:20.493623] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.841 [2024-04-23 21:30:20.493633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89432 len:8 PRP1 0x0 PRP2 0x0 00:29:32.841 [2024-04-23 21:30:20.493641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.841 [2024-04-23 21:30:20.493649] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.841 [2024-04-23 21:30:20.493655] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.841 [2024-04-23 21:30:20.493662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89440 len:8 PRP1 0x0 PRP2 0x0 00:29:32.841 [2024-04-23 21:30:20.493669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.841 [2024-04-23 21:30:20.493677] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.842 [2024-04-23 21:30:20.493683] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.842 [2024-04-23 21:30:20.493690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89448 len:8 PRP1 0x0 PRP2 0x0 00:29:32.842 [2024-04-23 21:30:20.493698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.842 [2024-04-23 21:30:20.493705] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.842 [2024-04-23 21:30:20.493711] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.842 [2024-04-23 21:30:20.493718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89456 len:8 PRP1 0x0 PRP2 0x0 00:29:32.842 [2024-04-23 21:30:20.493726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.842 [2024-04-23 21:30:20.493733] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.842 [2024-04-23 21:30:20.493739] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.842 [2024-04-23 21:30:20.493745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89464 len:8 PRP1 0x0 PRP2 0x0 00:29:32.842 [2024-04-23 21:30:20.493753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.842 [2024-04-23 21:30:20.493761] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.842 [2024-04-23 21:30:20.493767] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.842 [2024-04-23 21:30:20.493775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:0 nsid:1 lba:88824 len:8 PRP1 0x0 PRP2 0x0 00:29:32.842 [2024-04-23 21:30:20.493783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.842 [2024-04-23 21:30:20.493791] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.842 [2024-04-23 21:30:20.493797] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.842 [2024-04-23 21:30:20.493804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88832 len:8 PRP1 0x0 PRP2 0x0 00:29:32.842 [2024-04-23 21:30:20.493812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.842 [2024-04-23 21:30:20.493820] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:32.842 [2024-04-23 21:30:20.493829] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:32.842 [2024-04-23 21:30:20.493836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89472 len:8 PRP1 0x0 PRP2 0x0 00:29:32.842 [2024-04-23 21:30:20.493844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.842 [2024-04-23 21:30:20.493966] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x614000009240 was disconnected and freed. reset controller. 00:29:32.842 [2024-04-23 21:30:20.493980] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:29:32.842 [2024-04-23 21:30:20.493990] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.842 [2024-04-23 21:30:20.496595] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.842 [2024-04-23 21:30:20.496624] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004a40 (9): Bad file descriptor 00:29:32.842 [2024-04-23 21:30:20.523475] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
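Every record in the run above is the same drain pattern: when bdev_nvme disconnects the qpair during failover, each request still queued on the deleted submission queue is completed manually with the generic ABORTED - SQ DELETION status (sct 00, sc 08) rather than being silently dropped, and the bdev layer retries it on the new path. A minimal sketch (a hypothetical helper, not part of failover.sh) that tallies those records per opcode from a saved log such as try.txt:

  # Count aborted READ/WRITE commands and report their LBA ranges.
  grep -oE '(READ|WRITE) sqid:1 cid:[0-9]+ nsid:1 lba:[0-9]+' try.txt |
    awk '{ op = $1; split($5, a, ":"); lba = a[2] + 0
           n[op]++
           if (!(op in lo) || lba < lo[op]) lo[op] = lba
           if (lba > hi[op]) hi[op] = lba }
         END { for (op in n) printf "%s: %d aborted, lba %d..%d\n", op, n[op], lo[op], hi[op] }'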
00:29:32.842 
00:29:32.842                                        Latency(us)
00:29:32.842 Device Information          : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:29:32.842 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:32.842   Verification LBA range: start 0x0 length 0x4000
00:29:32.842   NVMe0n1                   :      15.01   11501.29      44.93    567.43     0.00   10585.70     983.04   18901.96
00:29:32.842 ===================================================================================================================
00:29:32.842 Total                       :              11501.29      44.93    567.43     0.00   10585.70     983.04   18901.96
00:29:32.842 Received shutdown signal, test time was about 15.000000 seconds
00:29:32.842 
00:29:32.842                                        Latency(us)
00:29:32.842 Device Information          : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:29:32.842 ===================================================================================================================
00:29:32.842 Total                       :                  0.00       0.00      0.00      0.00       0.00       0.00       0.00
00:29:32.842 21:30:27 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:29:32.842 21:30:27 -- host/failover.sh@65 -- # count=3
00:29:32.842 21:30:27 -- host/failover.sh@67 -- # (( count != 3 ))
00:29:32.842 21:30:27 -- host/failover.sh@73 -- # bdevperf_pid=1611193
00:29:32.842 21:30:27 -- host/failover.sh@75 -- # waitforlisten 1611193 /var/tmp/bdevperf.sock
00:29:32.842 21:30:27 -- common/autotest_common.sh@817 -- # '[' -z 1611193 ']'
00:29:32.842 21:30:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:29:32.842 21:30:27 -- common/autotest_common.sh@822 -- # local max_retries=100
00:29:32.842 21:30:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
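The summary table above is internally consistent: for a 4096-byte verify workload, MiB/s is IOPS * 4096 / 2^20, and 11501.29 * 4096 / 1048576 comes out to about 44.93, matching the table; the second, all-zero table is just the post-shutdown reprint. The count=3 check likewise reflects the three 'Resetting controller successful' lines the grep finds, one per failover reset, which the test asserts with (( count != 3 )). A standalone check of the throughput arithmetic (illustration only):

  awk 'BEGIN { iops = 11501.29; io_size = 4096
               printf "MiB/s = %.2f\n", iops * io_size / (1024 * 1024) }'   # prints MiB/s = 44.93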
00:29:32.842 21:30:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:32.842 21:30:27 -- common/autotest_common.sh@10 -- # set +x 00:29:32.842 21:30:27 -- host/failover.sh@72 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:29:33.783 21:30:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:33.783 21:30:27 -- common/autotest_common.sh@850 -- # return 0 00:29:33.783 21:30:27 -- host/failover.sh@76 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:33.783 [2024-04-23 21:30:27.970225] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:33.783 21:30:27 -- host/failover.sh@77 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:29:34.042 [2024-04-23 21:30:28.122294] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:29:34.042 21:30:28 -- host/failover.sh@78 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:34.300 NVMe0n1 00:29:34.300 21:30:28 -- host/failover.sh@79 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:34.558 00:29:34.558 21:30:28 -- host/failover.sh@80 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:35.123 00:29:35.123 21:30:29 -- host/failover.sh@82 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:35.123 21:30:29 -- host/failover.sh@82 -- # grep -q NVMe0 00:29:35.123 21:30:29 -- host/failover.sh@84 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:35.123 21:30:29 -- host/failover.sh@87 -- # sleep 3 00:29:38.409 21:30:32 -- host/failover.sh@88 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:38.409 21:30:32 -- host/failover.sh@88 -- # grep -q NVMe0 00:29:38.409 21:30:32 -- host/failover.sh@90 -- # run_test_pid=1612320 00:29:38.409 21:30:32 -- host/failover.sh@92 -- # wait 1612320 00:29:38.409 21:30:32 -- host/failover.sh@89 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:39.344 0 00:29:39.602 21:30:33 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:39.602 [2024-04-23 21:30:27.095915] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 
00:29:39.602 [2024-04-23 21:30:27.096038] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1611193 ] 00:29:39.602 EAL: No free 2048 kB hugepages reported on node 1 00:29:39.602 [2024-04-23 21:30:27.210618] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:39.602 [2024-04-23 21:30:27.302369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:39.602 [2024-04-23 21:30:29.368755] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:29:39.602 [2024-04-23 21:30:29.368819] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:39.602 [2024-04-23 21:30:29.368834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.602 [2024-04-23 21:30:29.368847] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:39.602 [2024-04-23 21:30:29.368855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.602 [2024-04-23 21:30:29.368864] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:39.602 [2024-04-23 21:30:29.368872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.602 [2024-04-23 21:30:29.368881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:39.602 [2024-04-23 21:30:29.368889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.602 [2024-04-23 21:30:29.368897] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:39.602 [2024-04-23 21:30:29.368940] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:39.602 [2024-04-23 21:30:29.368963] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004a40 (9): Bad file descriptor 00:29:39.602 [2024-04-23 21:30:29.418413] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:29:39.602 Running I/O for 1 seconds... 
00:29:39.602
00:29:39.602 Latency(us)
00:29:39.602 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:39.602 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:39.602 Verification LBA range: start 0x0 length 0x4000
00:29:39.602 NVMe0n1 : 1.01 11709.82 45.74 0.00 0.00 10890.13 2259.27 9726.92
00:29:39.602 ===================================================================================================================
00:29:39.602 Total : 11709.82 45.74 0.00 0.00 10890.13 2259.27 9726.92
00:29:39.602 21:30:33 -- host/failover.sh@95 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:29:39.602 21:30:33 -- host/failover.sh@95 -- # grep -q NVMe0
00:29:39.602 21:30:33 -- host/failover.sh@98 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:29:39.861 21:30:33 -- host/failover.sh@99 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:29:39.861 21:30:33 -- host/failover.sh@99 -- # grep -q NVMe0
00:29:39.861 21:30:34 -- host/failover.sh@100 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:29:40.121 21:30:34 -- host/failover.sh@101 -- # sleep 3
00:29:43.408 21:30:37 -- host/failover.sh@103 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:29:43.408 21:30:37 -- host/failover.sh@103 -- # grep -q NVMe0
00:29:43.408 21:30:37 -- host/failover.sh@108 -- # killprocess 1611193
00:29:43.408 21:30:37 -- common/autotest_common.sh@936 -- # '[' -z 1611193 ']'
00:29:43.408 21:30:37 -- common/autotest_common.sh@940 -- # kill -0 1611193
00:29:43.408 21:30:37 -- common/autotest_common.sh@941 -- # uname
00:29:43.408 21:30:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:29:43.408 21:30:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1611193
00:29:43.408 21:30:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:29:43.408 21:30:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:29:43.408 21:30:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1611193'
00:29:43.408 killing process with pid 1611193
00:29:43.408 21:30:37 -- common/autotest_common.sh@955 -- # kill 1611193
00:29:43.408 21:30:37 -- common/autotest_common.sh@960 -- # wait 1611193
00:29:43.666 21:30:37 -- host/failover.sh@110 -- # sync
00:29:43.666 21:30:37 -- host/failover.sh@111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:29:43.666 21:30:37 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:29:43.666 21:30:37 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt
00:29:43.666 21:30:37 -- host/failover.sh@116 -- # nvmftestfini
00:29:43.666 21:30:37 -- nvmf/common.sh@477 -- # nvmfcleanup
00:29:43.666 21:30:37 -- nvmf/common.sh@117 -- # sync
00:29:43.666 21:30:37 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:29:43.666 21:30:37 -- nvmf/common.sh@120 -- # set +e
00:29:43.666 21:30:37 -- nvmf/common.sh@121 -- # for i in {1..20}
00:29:43.666 21:30:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:29:43.666 rmmod nvme_tcp 00:29:43.666 rmmod nvme_fabrics 00:29:43.666 rmmod nvme_keyring 00:29:43.666 21:30:37 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:43.925 21:30:37 -- nvmf/common.sh@124 -- # set -e 00:29:43.925 21:30:37 -- nvmf/common.sh@125 -- # return 0 00:29:43.925 21:30:37 -- nvmf/common.sh@478 -- # '[' -n 1607640 ']' 00:29:43.925 21:30:37 -- nvmf/common.sh@479 -- # killprocess 1607640 00:29:43.925 21:30:37 -- common/autotest_common.sh@936 -- # '[' -z 1607640 ']' 00:29:43.925 21:30:37 -- common/autotest_common.sh@940 -- # kill -0 1607640 00:29:43.925 21:30:37 -- common/autotest_common.sh@941 -- # uname 00:29:43.925 21:30:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:43.925 21:30:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1607640 00:29:43.925 21:30:37 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:29:43.925 21:30:37 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:29:43.925 21:30:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1607640' 00:29:43.925 killing process with pid 1607640 00:29:43.925 21:30:37 -- common/autotest_common.sh@955 -- # kill 1607640 00:29:43.925 21:30:37 -- common/autotest_common.sh@960 -- # wait 1607640 00:29:44.496 21:30:38 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:29:44.496 21:30:38 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:29:44.496 21:30:38 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:29:44.496 21:30:38 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:44.496 21:30:38 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:44.496 21:30:38 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:44.496 21:30:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:44.496 21:30:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:46.405 21:30:40 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:46.405 00:29:46.405 real 0m37.381s 00:29:46.405 user 1m59.495s 00:29:46.405 sys 0m6.710s 00:29:46.405 21:30:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:46.405 21:30:40 -- common/autotest_common.sh@10 -- # set +x 00:29:46.405 ************************************ 00:29:46.405 END TEST nvmf_failover 00:29:46.405 ************************************ 00:29:46.405 21:30:40 -- nvmf/nvmf.sh@99 -- # run_test nvmf_discovery /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:29:46.405 21:30:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:29:46.405 21:30:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:46.405 21:30:40 -- common/autotest_common.sh@10 -- # set +x 00:29:46.667 ************************************ 00:29:46.667 START TEST nvmf_discovery 00:29:46.667 ************************************ 00:29:46.667 21:30:40 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:29:46.667 * Looking for test storage... 
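Before the next test begins, one helper from the teardown above is worth seeing assembled: killprocess, which the trace expands step by step for pids 1611193 and 1607640. A reconstruction from those expansions (a sketch, not the verbatim autotest_common.sh source):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1                       # is it still alive?
        local process_name=
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        # the real helper special-cases process_name = sudo; in this run both
        # checks resolved to reactor_0 / reactor_1, so the plain path was taken
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                      # reap it so sockets and hugepages are released
    }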
00:29:46.667 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:29:46.667 21:30:40 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:29:46.667 21:30:40 -- nvmf/common.sh@7 -- # uname -s 00:29:46.667 21:30:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:46.667 21:30:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:46.667 21:30:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:46.667 21:30:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:46.667 21:30:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:46.667 21:30:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:46.667 21:30:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:46.667 21:30:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:46.667 21:30:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:46.667 21:30:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:46.667 21:30:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:29:46.667 21:30:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:29:46.667 21:30:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:46.667 21:30:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:46.667 21:30:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:29:46.667 21:30:40 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:46.667 21:30:40 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:29:46.667 21:30:40 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:46.667 21:30:40 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:46.667 21:30:40 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:46.667 21:30:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:46.667 21:30:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:46.668 21:30:40 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:46.668 21:30:40 -- paths/export.sh@5 -- # export PATH 00:29:46.668 21:30:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:46.668 21:30:40 -- nvmf/common.sh@47 -- # : 0 00:29:46.668 21:30:40 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:46.668 21:30:40 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:46.668 21:30:40 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:46.668 21:30:40 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:46.668 21:30:40 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:46.668 21:30:40 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:46.668 21:30:40 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:46.668 21:30:40 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:46.668 21:30:40 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:29:46.668 21:30:40 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:29:46.668 21:30:40 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:29:46.668 21:30:40 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:29:46.668 21:30:40 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:29:46.668 21:30:40 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:29:46.668 21:30:40 -- host/discovery.sh@25 -- # nvmftestinit 00:29:46.668 21:30:40 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:29:46.668 21:30:40 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:46.668 21:30:40 -- nvmf/common.sh@437 -- # prepare_net_devs 00:29:46.668 21:30:40 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:29:46.668 21:30:40 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:29:46.668 21:30:40 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:46.668 21:30:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:46.668 21:30:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:46.668 21:30:40 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:29:46.668 21:30:40 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:29:46.668 21:30:40 -- nvmf/common.sh@285 -- # xtrace_disable 00:29:46.668 21:30:40 -- common/autotest_common.sh@10 -- # set +x 00:29:53.251 21:30:46 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:53.251 21:30:46 -- nvmf/common.sh@291 -- # pci_devs=() 00:29:53.251 21:30:46 -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:53.251 21:30:46 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:53.251 21:30:46 -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:53.251 21:30:46 -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:53.251 21:30:46 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:53.251 21:30:46 -- nvmf/common.sh@295 -- # net_devs=() 00:29:53.251 21:30:46 -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:53.251 21:30:46 -- nvmf/common.sh@296 -- # e810=() 00:29:53.251 21:30:46 -- nvmf/common.sh@296 -- # local -ga e810 00:29:53.251 21:30:46 -- nvmf/common.sh@297 -- # x722=() 00:29:53.251 21:30:46 -- nvmf/common.sh@297 -- # local -ga x722 00:29:53.251 21:30:46 -- nvmf/common.sh@298 -- # mlx=() 00:29:53.251 21:30:46 -- nvmf/common.sh@298 -- # local -ga mlx 00:29:53.251 21:30:46 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:53.251 21:30:46 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:53.251 21:30:46 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:53.251 21:30:46 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:53.251 21:30:46 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:53.251 21:30:46 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:53.251 21:30:46 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:53.251 21:30:46 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:53.251 21:30:46 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:53.251 21:30:46 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:53.251 21:30:46 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:53.251 21:30:46 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:53.251 21:30:46 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:53.251 21:30:46 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:29:53.251 21:30:46 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:29:53.251 21:30:46 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:29:53.251 21:30:46 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:53.251 21:30:46 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:53.251 21:30:46 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:29:53.251 Found 0000:27:00.0 (0x8086 - 0x159b) 00:29:53.251 21:30:46 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:53.251 21:30:46 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:53.251 21:30:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:53.251 21:30:46 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:53.251 21:30:46 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:53.251 21:30:46 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:53.251 21:30:46 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:29:53.251 Found 0000:27:00.1 (0x8086 - 0x159b) 00:29:53.251 21:30:46 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:53.251 21:30:46 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:53.251 21:30:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:53.251 21:30:46 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:53.251 21:30:46 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:53.251 21:30:46 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:53.251 21:30:46 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:29:53.251 21:30:46 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:53.251 21:30:46 -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:53.251 21:30:46 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:29:53.251 21:30:46 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:53.251 21:30:46 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:29:53.251 Found net devices under 0000:27:00.0: cvl_0_0 00:29:53.251 21:30:46 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:29:53.251 21:30:46 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:53.251 21:30:46 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:53.251 21:30:46 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:29:53.251 21:30:46 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:53.251 21:30:46 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:29:53.251 Found net devices under 0000:27:00.1: cvl_0_1 00:29:53.251 21:30:46 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:29:53.251 21:30:46 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:29:53.252 21:30:46 -- nvmf/common.sh@403 -- # is_hw=yes 00:29:53.252 21:30:46 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:29:53.252 21:30:46 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:29:53.252 21:30:46 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:29:53.252 21:30:46 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:53.252 21:30:46 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:53.252 21:30:46 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:53.252 21:30:46 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:53.252 21:30:46 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:53.252 21:30:46 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:53.252 21:30:46 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:53.252 21:30:46 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:53.252 21:30:46 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:53.252 21:30:46 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:53.252 21:30:46 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:53.252 21:30:46 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:53.252 21:30:46 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:53.252 21:30:46 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:53.252 21:30:46 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:53.252 21:30:46 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:53.252 21:30:46 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:53.252 21:30:46 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:53.252 21:30:46 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:53.252 21:30:46 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:53.252 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:53.252 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.427 ms 00:29:53.252 00:29:53.252 --- 10.0.0.2 ping statistics --- 00:29:53.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:53.252 rtt min/avg/max/mdev = 0.427/0.427/0.427/0.000 ms 00:29:53.252 21:30:46 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:53.252 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:53.252 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.259 ms 00:29:53.252 00:29:53.252 --- 10.0.0.1 ping statistics --- 00:29:53.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:53.252 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:29:53.252 21:30:46 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:53.252 21:30:46 -- nvmf/common.sh@411 -- # return 0 00:29:53.252 21:30:46 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:29:53.252 21:30:46 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:53.252 21:30:46 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:29:53.252 21:30:46 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:29:53.252 21:30:46 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:53.252 21:30:46 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:29:53.252 21:30:46 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:29:53.252 21:30:46 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:29:53.252 21:30:46 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:29:53.252 21:30:46 -- common/autotest_common.sh@710 -- # xtrace_disable 00:29:53.252 21:30:46 -- common/autotest_common.sh@10 -- # set +x 00:29:53.252 21:30:46 -- nvmf/common.sh@470 -- # nvmfpid=1617394 00:29:53.252 21:30:46 -- nvmf/common.sh@471 -- # waitforlisten 1617394 00:29:53.252 21:30:46 -- common/autotest_common.sh@817 -- # '[' -z 1617394 ']' 00:29:53.252 21:30:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:53.252 21:30:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:53.252 21:30:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:53.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:53.252 21:30:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:53.252 21:30:46 -- common/autotest_common.sh@10 -- # set +x 00:29:53.252 21:30:46 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:29:53.252 [2024-04-23 21:30:46.742267] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 00:29:53.252 [2024-04-23 21:30:46.742375] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:53.252 EAL: No free 2048 kB hugepages reported on node 1 00:29:53.252 [2024-04-23 21:30:46.866072] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:53.252 [2024-04-23 21:30:46.965787] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:53.252 [2024-04-23 21:30:46.965819] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:53.252 [2024-04-23 21:30:46.965829] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:53.252 [2024-04-23 21:30:46.965838] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:53.252 [2024-04-23 21:30:46.965845] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
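The nvmf_tcp_init sequence a short way above is the whole test topology for this phy run: the two ports of one adapter (cvl_0_0 and cvl_0_1) are split across a network namespace so that target and initiator traffic crosses a real link rather than loopback. Condensed from the commands in the trace:

    ip netns add cvl_0_0_ns_spdk                        # the target gets its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator port stays in the default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target reachability
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # and the reverse path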
00:29:53.252 [2024-04-23 21:30:46.965874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:53.252 21:30:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:53.252 21:30:47 -- common/autotest_common.sh@850 -- # return 0 00:29:53.252 21:30:47 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:29:53.252 21:30:47 -- common/autotest_common.sh@716 -- # xtrace_disable 00:29:53.252 21:30:47 -- common/autotest_common.sh@10 -- # set +x 00:29:53.252 21:30:47 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:53.252 21:30:47 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:53.252 21:30:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:53.252 21:30:47 -- common/autotest_common.sh@10 -- # set +x 00:29:53.252 [2024-04-23 21:30:47.480686] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:53.252 21:30:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:53.252 21:30:47 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:29:53.252 21:30:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:53.252 21:30:47 -- common/autotest_common.sh@10 -- # set +x 00:29:53.252 [2024-04-23 21:30:47.488839] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:29:53.252 21:30:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:53.252 21:30:47 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:29:53.252 21:30:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:53.252 21:30:47 -- common/autotest_common.sh@10 -- # set +x 00:29:53.252 null0 00:29:53.252 21:30:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:53.252 21:30:47 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:29:53.252 21:30:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:53.252 21:30:47 -- common/autotest_common.sh@10 -- # set +x 00:29:53.252 null1 00:29:53.252 21:30:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:53.252 21:30:47 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:29:53.252 21:30:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:53.252 21:30:47 -- common/autotest_common.sh@10 -- # set +x 00:29:53.252 21:30:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:53.252 21:30:47 -- host/discovery.sh@45 -- # hostpid=1617698 00:29:53.252 21:30:47 -- host/discovery.sh@46 -- # waitforlisten 1617698 /tmp/host.sock 00:29:53.252 21:30:47 -- common/autotest_common.sh@817 -- # '[' -z 1617698 ']' 00:29:53.252 21:30:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:29:53.252 21:30:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:53.252 21:30:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:29:53.252 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:29:53.252 21:30:47 -- host/discovery.sh@44 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:29:53.252 21:30:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:53.252 21:30:47 -- common/autotest_common.sh@10 -- # set +x 00:29:53.512 [2024-04-23 21:30:47.592448] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 
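From here the discovery test runs two SPDK apps side by side: the target inside the namespace on the default RPC socket, and the host-side nvmf_tgt just launched on /tmp/host.sock, which in the records that follow opens a discovery connection to port 8009 and materializes a bdev per discovered subsystem. The essential wiring, condensed from the RPCs in this trace:

    ./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &     # host side, private RPC socket
    hostpid=$!
    waitforlisten "$hostpid" /tmp/host.sock
    rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme
    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
    # the assertions then poll two views of the host state:
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs   # get_subsystem_names
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs              # get_bdev_list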
00:29:53.512 [2024-04-23 21:30:47.592554] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1617698 ] 00:29:53.512 EAL: No free 2048 kB hugepages reported on node 1 00:29:53.512 [2024-04-23 21:30:47.703049] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:53.772 [2024-04-23 21:30:47.792990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:54.343 21:30:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:54.343 21:30:48 -- common/autotest_common.sh@850 -- # return 0 00:29:54.343 21:30:48 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:54.343 21:30:48 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:29:54.343 21:30:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:54.343 21:30:48 -- common/autotest_common.sh@10 -- # set +x 00:29:54.343 21:30:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:54.343 21:30:48 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:29:54.343 21:30:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:54.343 21:30:48 -- common/autotest_common.sh@10 -- # set +x 00:29:54.343 21:30:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:54.343 21:30:48 -- host/discovery.sh@72 -- # notify_id=0 00:29:54.343 21:30:48 -- host/discovery.sh@83 -- # get_subsystem_names 00:29:54.343 21:30:48 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:54.343 21:30:48 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:54.343 21:30:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:54.343 21:30:48 -- common/autotest_common.sh@10 -- # set +x 00:29:54.343 21:30:48 -- host/discovery.sh@59 -- # sort 00:29:54.343 21:30:48 -- host/discovery.sh@59 -- # xargs 00:29:54.343 21:30:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:54.343 21:30:48 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:29:54.343 21:30:48 -- host/discovery.sh@84 -- # get_bdev_list 00:29:54.343 21:30:48 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:54.343 21:30:48 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:54.343 21:30:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:54.343 21:30:48 -- common/autotest_common.sh@10 -- # set +x 00:29:54.343 21:30:48 -- host/discovery.sh@55 -- # sort 00:29:54.343 21:30:48 -- host/discovery.sh@55 -- # xargs 00:29:54.343 21:30:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:54.343 21:30:48 -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:29:54.343 21:30:48 -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:29:54.343 21:30:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:54.343 21:30:48 -- common/autotest_common.sh@10 -- # set +x 00:29:54.343 21:30:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:54.343 21:30:48 -- host/discovery.sh@87 -- # get_subsystem_names 00:29:54.343 21:30:48 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:54.343 21:30:48 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:54.343 21:30:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:54.343 21:30:48 -- host/discovery.sh@59 -- # sort 
00:29:54.343 21:30:48 -- common/autotest_common.sh@10 -- # set +x 00:29:54.343 21:30:48 -- host/discovery.sh@59 -- # xargs 00:29:54.343 21:30:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:54.343 21:30:48 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:29:54.343 21:30:48 -- host/discovery.sh@88 -- # get_bdev_list 00:29:54.343 21:30:48 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:54.343 21:30:48 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:54.343 21:30:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:54.343 21:30:48 -- common/autotest_common.sh@10 -- # set +x 00:29:54.343 21:30:48 -- host/discovery.sh@55 -- # sort 00:29:54.343 21:30:48 -- host/discovery.sh@55 -- # xargs 00:29:54.343 21:30:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:54.343 21:30:48 -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:29:54.343 21:30:48 -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:29:54.343 21:30:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:54.343 21:30:48 -- common/autotest_common.sh@10 -- # set +x 00:29:54.343 21:30:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:54.343 21:30:48 -- host/discovery.sh@91 -- # get_subsystem_names 00:29:54.343 21:30:48 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:54.343 21:30:48 -- host/discovery.sh@59 -- # xargs 00:29:54.343 21:30:48 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:54.343 21:30:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:54.343 21:30:48 -- host/discovery.sh@59 -- # sort 00:29:54.343 21:30:48 -- common/autotest_common.sh@10 -- # set +x 00:29:54.343 21:30:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:54.343 21:30:48 -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:29:54.343 21:30:48 -- host/discovery.sh@92 -- # get_bdev_list 00:29:54.343 21:30:48 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:54.343 21:30:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:54.343 21:30:48 -- common/autotest_common.sh@10 -- # set +x 00:29:54.343 21:30:48 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:54.343 21:30:48 -- host/discovery.sh@55 -- # sort 00:29:54.343 21:30:48 -- host/discovery.sh@55 -- # xargs 00:29:54.343 21:30:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:54.343 21:30:48 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:29:54.343 21:30:48 -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:54.343 21:30:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:54.343 21:30:48 -- common/autotest_common.sh@10 -- # set +x 00:29:54.343 [2024-04-23 21:30:48.597117] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:54.343 21:30:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:54.343 21:30:48 -- host/discovery.sh@97 -- # get_subsystem_names 00:29:54.343 21:30:48 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:54.343 21:30:48 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:54.343 21:30:48 -- host/discovery.sh@59 -- # sort 00:29:54.343 21:30:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:54.343 21:30:48 -- host/discovery.sh@59 -- # xargs 00:29:54.343 21:30:48 -- common/autotest_common.sh@10 -- # set +x 00:29:54.603 21:30:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:54.603 21:30:48 -- 
host/discovery.sh@97 -- # [[ '' == '' ]] 00:29:54.603 21:30:48 -- host/discovery.sh@98 -- # get_bdev_list 00:29:54.603 21:30:48 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:54.603 21:30:48 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:54.603 21:30:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:54.603 21:30:48 -- common/autotest_common.sh@10 -- # set +x 00:29:54.603 21:30:48 -- host/discovery.sh@55 -- # sort 00:29:54.603 21:30:48 -- host/discovery.sh@55 -- # xargs 00:29:54.603 21:30:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:54.603 21:30:48 -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:29:54.603 21:30:48 -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:29:54.603 21:30:48 -- host/discovery.sh@79 -- # expected_count=0 00:29:54.603 21:30:48 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:54.603 21:30:48 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:29:54.603 21:30:48 -- common/autotest_common.sh@901 -- # local max=10 00:29:54.603 21:30:48 -- common/autotest_common.sh@902 -- # (( max-- )) 00:29:54.603 21:30:48 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:29:54.603 21:30:48 -- common/autotest_common.sh@903 -- # get_notification_count 00:29:54.603 21:30:48 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:29:54.603 21:30:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:54.603 21:30:48 -- common/autotest_common.sh@10 -- # set +x 00:29:54.603 21:30:48 -- host/discovery.sh@74 -- # jq '. | length' 00:29:54.603 21:30:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:54.603 21:30:48 -- host/discovery.sh@74 -- # notification_count=0 00:29:54.603 21:30:48 -- host/discovery.sh@75 -- # notify_id=0 00:29:54.603 21:30:48 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:29:54.603 21:30:48 -- common/autotest_common.sh@904 -- # return 0 00:29:54.603 21:30:48 -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:29:54.603 21:30:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:54.603 21:30:48 -- common/autotest_common.sh@10 -- # set +x 00:29:54.603 21:30:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:54.603 21:30:48 -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:29:54.603 21:30:48 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:29:54.603 21:30:48 -- common/autotest_common.sh@901 -- # local max=10 00:29:54.603 21:30:48 -- common/autotest_common.sh@902 -- # (( max-- )) 00:29:54.603 21:30:48 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:29:54.603 21:30:48 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:29:54.603 21:30:48 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:54.603 21:30:48 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:54.603 21:30:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:54.603 21:30:48 -- common/autotest_common.sh@10 -- # set +x 00:29:54.603 21:30:48 -- host/discovery.sh@59 -- # xargs 00:29:54.603 21:30:48 -- host/discovery.sh@59 -- # sort 00:29:54.603 21:30:48 -- common/autotest_common.sh@577 -- # [[ 0 == 
0 ]] 00:29:54.603 21:30:48 -- common/autotest_common.sh@903 -- # [[ '' == \n\v\m\e\0 ]] 00:29:54.603 21:30:48 -- common/autotest_common.sh@906 -- # sleep 1 00:29:55.170 [2024-04-23 21:30:49.374865] bdev_nvme.c:6919:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:29:55.170 [2024-04-23 21:30:49.374896] bdev_nvme.c:6999:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:29:55.170 [2024-04-23 21:30:49.374924] bdev_nvme.c:6882:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:55.429 [2024-04-23 21:30:49.464973] bdev_nvme.c:6848:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:29:55.429 [2024-04-23 21:30:49.691562] bdev_nvme.c:6738:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:29:55.429 [2024-04-23 21:30:49.691591] bdev_nvme.c:6697:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:29:55.690 21:30:49 -- common/autotest_common.sh@902 -- # (( max-- )) 00:29:55.690 21:30:49 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:29:55.690 21:30:49 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:29:55.690 21:30:49 -- host/discovery.sh@59 -- # xargs 00:29:55.690 21:30:49 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:55.690 21:30:49 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:55.690 21:30:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:55.690 21:30:49 -- host/discovery.sh@59 -- # sort 00:29:55.690 21:30:49 -- common/autotest_common.sh@10 -- # set +x 00:29:55.690 21:30:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:55.690 21:30:49 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:55.690 21:30:49 -- common/autotest_common.sh@904 -- # return 0 00:29:55.690 21:30:49 -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:29:55.690 21:30:49 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:29:55.690 21:30:49 -- common/autotest_common.sh@901 -- # local max=10 00:29:55.690 21:30:49 -- common/autotest_common.sh@902 -- # (( max-- )) 00:29:55.690 21:30:49 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:29:55.690 21:30:49 -- common/autotest_common.sh@903 -- # get_bdev_list 00:29:55.690 21:30:49 -- host/discovery.sh@55 -- # xargs 00:29:55.690 21:30:49 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:55.690 21:30:49 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:55.690 21:30:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:55.690 21:30:49 -- host/discovery.sh@55 -- # sort 00:29:55.690 21:30:49 -- common/autotest_common.sh@10 -- # set +x 00:29:55.690 21:30:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:55.690 21:30:49 -- common/autotest_common.sh@903 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:29:55.690 21:30:49 -- common/autotest_common.sh@904 -- # return 0 00:29:55.690 21:30:49 -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:29:55.690 21:30:49 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:29:55.690 21:30:49 -- common/autotest_common.sh@901 -- # local max=10 00:29:55.690 21:30:49 -- 
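Most of what remains is one polling helper expanded over and over: waitforcondition re-evaluates its condition string once per second, and the retry above (the failed [[ '' == nvme0 ]], a sleep 1, then success once the discovery controller attaches) is exactly one trip around its loop. Reconstructed from the autotest_common.sh expansions in this trace (a sketch, not the verbatim source):

    waitforcondition() {
        local cond=$1
        local max=10
        while (( max-- )); do
            eval "$cond" && return 0    # e.g. '[[ "$(get_subsystem_names)" == "nvme0" ]]'
            sleep 1                     # retry; here the second pass saw nvme0
        done
        return 1
    }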
common/autotest_common.sh@902 -- # (( max-- )) 00:29:55.690 21:30:49 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:29:55.690 21:30:49 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:29:55.690 21:30:49 -- host/discovery.sh@63 -- # xargs 00:29:55.690 21:30:49 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:29:55.690 21:30:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:55.690 21:30:49 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:55.690 21:30:49 -- common/autotest_common.sh@10 -- # set +x 00:29:55.690 21:30:49 -- host/discovery.sh@63 -- # sort -n 00:29:55.690 21:30:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:55.690 21:30:49 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0 ]] 00:29:55.690 21:30:49 -- common/autotest_common.sh@904 -- # return 0 00:29:55.690 21:30:49 -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:29:55.690 21:30:49 -- host/discovery.sh@79 -- # expected_count=1 00:29:55.690 21:30:49 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:55.690 21:30:49 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:29:55.690 21:30:49 -- common/autotest_common.sh@901 -- # local max=10 00:29:55.690 21:30:49 -- common/autotest_common.sh@902 -- # (( max-- )) 00:29:55.690 21:30:49 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:29:55.690 21:30:49 -- common/autotest_common.sh@903 -- # get_notification_count 00:29:55.690 21:30:49 -- host/discovery.sh@74 -- # jq '. 
| length' 00:29:55.690 21:30:49 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:29:55.690 21:30:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:55.690 21:30:49 -- common/autotest_common.sh@10 -- # set +x 00:29:55.690 21:30:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:55.690 21:30:49 -- host/discovery.sh@74 -- # notification_count=1 00:29:55.690 21:30:49 -- host/discovery.sh@75 -- # notify_id=1 00:29:55.690 21:30:49 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:29:55.690 21:30:49 -- common/autotest_common.sh@904 -- # return 0 00:29:55.690 21:30:49 -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:29:55.690 21:30:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:55.690 21:30:49 -- common/autotest_common.sh@10 -- # set +x 00:29:55.690 21:30:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:55.690 21:30:49 -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:55.690 21:30:49 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:55.690 21:30:49 -- common/autotest_common.sh@901 -- # local max=10 00:29:55.690 21:30:49 -- common/autotest_common.sh@902 -- # (( max-- )) 00:29:55.690 21:30:49 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:29:55.690 21:30:49 -- common/autotest_common.sh@903 -- # get_bdev_list 00:29:55.690 21:30:49 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:55.690 21:30:49 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:55.690 21:30:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:55.690 21:30:49 -- host/discovery.sh@55 -- # sort 00:29:55.690 21:30:49 -- common/autotest_common.sh@10 -- # set +x 00:29:55.690 21:30:49 -- host/discovery.sh@55 -- # xargs 00:29:55.690 21:30:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:55.951 21:30:49 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:55.951 21:30:49 -- common/autotest_common.sh@904 -- # return 0 00:29:55.951 21:30:49 -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:29:55.951 21:30:49 -- host/discovery.sh@79 -- # expected_count=1 00:29:55.951 21:30:49 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:55.951 21:30:49 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:29:55.951 21:30:49 -- common/autotest_common.sh@901 -- # local max=10 00:29:55.951 21:30:49 -- common/autotest_common.sh@902 -- # (( max-- )) 00:29:55.951 21:30:49 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:29:55.951 21:30:49 -- common/autotest_common.sh@903 -- # get_notification_count 00:29:55.951 21:30:49 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:29:55.951 21:30:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:55.951 21:30:49 -- host/discovery.sh@74 -- # jq '. 
| length' 00:29:55.951 21:30:49 -- common/autotest_common.sh@10 -- # set +x 00:29:55.951 21:30:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:55.951 21:30:50 -- host/discovery.sh@74 -- # notification_count=1 00:29:55.951 21:30:50 -- host/discovery.sh@75 -- # notify_id=2 00:29:55.951 21:30:50 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:29:55.951 21:30:50 -- common/autotest_common.sh@904 -- # return 0 00:29:55.951 21:30:50 -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:29:55.951 21:30:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:55.951 21:30:50 -- common/autotest_common.sh@10 -- # set +x 00:29:55.951 [2024-04-23 21:30:50.013742] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:55.951 [2024-04-23 21:30:50.014317] bdev_nvme.c:6901:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:29:55.951 [2024-04-23 21:30:50.014371] bdev_nvme.c:6882:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:55.951 21:30:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:55.951 21:30:50 -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:29:55.951 21:30:50 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:29:55.951 21:30:50 -- common/autotest_common.sh@901 -- # local max=10 00:29:55.951 21:30:50 -- common/autotest_common.sh@902 -- # (( max-- )) 00:29:55.951 21:30:50 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:29:55.951 21:30:50 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:29:55.951 21:30:50 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:55.951 21:30:50 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:55.951 21:30:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:55.951 21:30:50 -- host/discovery.sh@59 -- # sort 00:29:55.951 21:30:50 -- common/autotest_common.sh@10 -- # set +x 00:29:55.951 21:30:50 -- host/discovery.sh@59 -- # xargs 00:29:55.951 21:30:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:55.951 21:30:50 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:55.951 21:30:50 -- common/autotest_common.sh@904 -- # return 0 00:29:55.951 21:30:50 -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:55.951 21:30:50 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:55.951 21:30:50 -- common/autotest_common.sh@901 -- # local max=10 00:29:55.951 21:30:50 -- common/autotest_common.sh@902 -- # (( max-- )) 00:29:55.951 21:30:50 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:29:55.951 21:30:50 -- common/autotest_common.sh@903 -- # get_bdev_list 00:29:55.951 21:30:50 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:55.951 21:30:50 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:55.951 21:30:50 -- host/discovery.sh@55 -- # sort 00:29:55.951 21:30:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:55.951 21:30:50 -- host/discovery.sh@55 -- # xargs 00:29:55.951 21:30:50 -- common/autotest_common.sh@10 -- # set +x 00:29:55.951 21:30:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:55.951 21:30:50 -- common/autotest_common.sh@903 
-- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:55.951 21:30:50 -- common/autotest_common.sh@904 -- # return 0 00:29:55.951 21:30:50 -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:29:55.951 21:30:50 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:29:55.951 21:30:50 -- common/autotest_common.sh@901 -- # local max=10 00:29:55.951 21:30:50 -- common/autotest_common.sh@902 -- # (( max-- )) 00:29:55.951 21:30:50 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:29:55.951 21:30:50 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:29:55.951 21:30:50 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:29:55.951 21:30:50 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:55.951 21:30:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:55.952 21:30:50 -- common/autotest_common.sh@10 -- # set +x 00:29:55.952 21:30:50 -- host/discovery.sh@63 -- # sort -n 00:29:55.952 21:30:50 -- host/discovery.sh@63 -- # xargs 00:29:55.952 21:30:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:55.952 [2024-04-23 21:30:50.143443] bdev_nvme.c:6843:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:29:55.952 21:30:50 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:29:55.952 21:30:50 -- common/autotest_common.sh@906 -- # sleep 1 00:29:55.952 [2024-04-23 21:30:50.206132] bdev_nvme.c:6738:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:29:55.952 [2024-04-23 21:30:50.206161] bdev_nvme.c:6697:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:29:55.952 [2024-04-23 21:30:50.206171] bdev_nvme.c:6697:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:29:56.893 21:30:51 -- common/autotest_common.sh@902 -- # (( max-- )) 00:29:56.893 21:30:51 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:29:56.893 21:30:51 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:29:56.893 21:30:51 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:29:56.893 21:30:51 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:56.893 21:30:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:56.893 21:30:51 -- host/discovery.sh@63 -- # sort -n 00:29:56.894 21:30:51 -- common/autotest_common.sh@10 -- # set +x 00:29:56.894 21:30:51 -- host/discovery.sh@63 -- # xargs 00:29:56.894 21:30:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:57.153 21:30:51 -- common/autotest_common.sh@903 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:29:57.153 21:30:51 -- common/autotest_common.sh@904 -- # return 0 00:29:57.153 21:30:51 -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:29:57.153 21:30:51 -- host/discovery.sh@79 -- # expected_count=0 00:29:57.153 21:30:51 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:57.153 21:30:51 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && 
((notification_count == expected_count))' 00:29:57.153 21:30:51 -- common/autotest_common.sh@901 -- # local max=10 00:29:57.153 21:30:51 -- common/autotest_common.sh@902 -- # (( max-- )) 00:29:57.153 21:30:51 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:29:57.153 21:30:51 -- common/autotest_common.sh@903 -- # get_notification_count 00:29:57.153 21:30:51 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:29:57.153 21:30:51 -- host/discovery.sh@74 -- # jq '. | length' 00:29:57.153 21:30:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:57.153 21:30:51 -- common/autotest_common.sh@10 -- # set +x 00:29:57.153 21:30:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:57.154 21:30:51 -- host/discovery.sh@74 -- # notification_count=0 00:29:57.154 21:30:51 -- host/discovery.sh@75 -- # notify_id=2 00:29:57.154 21:30:51 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:29:57.154 21:30:51 -- common/autotest_common.sh@904 -- # return 0 00:29:57.154 21:30:51 -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:57.154 21:30:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:57.154 21:30:51 -- common/autotest_common.sh@10 -- # set +x 00:29:57.154 [2024-04-23 21:30:51.230415] bdev_nvme.c:6901:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:29:57.154 [2024-04-23 21:30:51.230448] bdev_nvme.c:6882:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:57.154 21:30:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:57.154 21:30:51 -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:29:57.154 21:30:51 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:29:57.154 21:30:51 -- common/autotest_common.sh@901 -- # local max=10 00:29:57.154 21:30:51 -- common/autotest_common.sh@902 -- # (( max-- )) 00:29:57.154 21:30:51 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:29:57.154 21:30:51 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:29:57.154 [2024-04-23 21:30:51.239383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:57.154 [2024-04-23 21:30:51.239415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.154 [2024-04-23 21:30:51.239427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:57.154 [2024-04-23 21:30:51.239436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.154 [2024-04-23 21:30:51.239445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:57.154 [2024-04-23 21:30:51.239453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.154 [2024-04-23 21:30:51.239461] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:57.154 [2024-04-23 21:30:51.239472] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.154 [2024-04-23 21:30:51.239481] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005440 is same with the state(5) to be set 00:29:57.154 21:30:51 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:57.154 21:30:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:57.154 21:30:51 -- common/autotest_common.sh@10 -- # set +x 00:29:57.154 21:30:51 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:57.154 21:30:51 -- host/discovery.sh@59 -- # sort 00:29:57.154 21:30:51 -- host/discovery.sh@59 -- # xargs 00:29:57.154 [2024-04-23 21:30:51.249369] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005440 (9): Bad file descriptor 00:29:57.154 21:30:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:57.154 [2024-04-23 21:30:51.259381] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:57.154 [2024-04-23 21:30:51.259952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.154 [2024-04-23 21:30:51.260473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.154 [2024-04-23 21:30:51.260486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005440 with addr=10.0.0.2, port=4420 00:29:57.154 [2024-04-23 21:30:51.260497] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005440 is same with the state(5) to be set 00:29:57.154 [2024-04-23 21:30:51.260512] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005440 (9): Bad file descriptor 00:29:57.154 [2024-04-23 21:30:51.260533] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:57.154 [2024-04-23 21:30:51.260541] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:57.154 [2024-04-23 21:30:51.260551] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:57.154 [2024-04-23 21:30:51.260567] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
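The repeated "connect() failed, errno = 111" records above are the expected fallout of the listener removal: errno 111 is ECONNREFUSED, and once host/discovery.sh@127 drops the 4420 listener, every reconnect attempt to that port is refused until the discovery poller prunes the stale path. A hypothetical standalone equivalent of that rpc_cmd call (rpc_cmd is the suite's wrapper around scripts/rpc.py):

    # assumes a target running on the default RPC socket; NQN/addr/port as in the trace
    scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420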
00:29:57.154 [2024-04-23 21:30:51.269427] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:57.154 [2024-04-23 21:30:51.269857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.154 [2024-04-23 21:30:51.270381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.154 [2024-04-23 21:30:51.270392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005440 with addr=10.0.0.2, port=4420 00:29:57.154 [2024-04-23 21:30:51.270402] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005440 is same with the state(5) to be set 00:29:57.154 [2024-04-23 21:30:51.270416] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005440 (9): Bad file descriptor 00:29:57.154 [2024-04-23 21:30:51.270434] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:57.154 [2024-04-23 21:30:51.270443] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:57.154 [2024-04-23 21:30:51.270452] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:57.154 [2024-04-23 21:30:51.270464] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.154 21:30:51 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:57.154 21:30:51 -- common/autotest_common.sh@904 -- # return 0 00:29:57.154 21:30:51 -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:57.154 21:30:51 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:57.154 21:30:51 -- common/autotest_common.sh@901 -- # local max=10 00:29:57.154 21:30:51 -- common/autotest_common.sh@902 -- # (( max-- )) 00:29:57.154 21:30:51 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:29:57.154 21:30:51 -- common/autotest_common.sh@903 -- # get_bdev_list 00:29:57.154 [2024-04-23 21:30:51.279468] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:57.154 [2024-04-23 21:30:51.279748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.154 [2024-04-23 21:30:51.279982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.154 [2024-04-23 21:30:51.279993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005440 with addr=10.0.0.2, port=4420 00:29:57.154 [2024-04-23 21:30:51.280003] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005440 is same with the state(5) to be set 00:29:57.154 [2024-04-23 21:30:51.280017] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005440 (9): Bad file descriptor 00:29:57.154 [2024-04-23 21:30:51.280029] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:57.154 [2024-04-23 21:30:51.280036] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:57.154 [2024-04-23 21:30:51.280045] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
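The autotest_common.sh@900-@906 markers that keep reappearing in this stretch are the retry harness around each assertion; a minimal sketch reconstructed from those xtrace lines (the in-tree helper may differ in detail):

    waitforcondition() {
        local cond=$1   # @900: condition string, e.g. '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
        local max=10    # @901: up to ten one-second attempts
        while (( max-- )); do            # @902
            eval "$cond" && return 0     # @903/@904
            sleep 1                      # @906
        done
        return 1
    }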
00:29:57.154 [2024-04-23 21:30:51.280057] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.154 21:30:51 -- host/discovery.sh@55 -- # xargs 00:29:57.154 21:30:51 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:57.154 21:30:51 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:57.154 21:30:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:57.154 21:30:51 -- host/discovery.sh@55 -- # sort 00:29:57.154 21:30:51 -- common/autotest_common.sh@10 -- # set +x 00:29:57.154 [2024-04-23 21:30:51.289524] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:57.154 [2024-04-23 21:30:51.290077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.154 [2024-04-23 21:30:51.290611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.154 [2024-04-23 21:30:51.290622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005440 with addr=10.0.0.2, port=4420 00:29:57.154 [2024-04-23 21:30:51.290636] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005440 is same with the state(5) to be set 00:29:57.154 [2024-04-23 21:30:51.290649] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005440 (9): Bad file descriptor 00:29:57.154 [2024-04-23 21:30:51.290666] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:57.154 [2024-04-23 21:30:51.290674] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:57.154 [2024-04-23 21:30:51.290682] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:57.154 [2024-04-23 21:30:51.290694] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.154 [2024-04-23 21:30:51.299565] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:57.154 [2024-04-23 21:30:51.299801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.154 [2024-04-23 21:30:51.300026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.154 [2024-04-23 21:30:51.300036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005440 with addr=10.0.0.2, port=4420 00:29:57.154 [2024-04-23 21:30:51.300045] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005440 is same with the state(5) to be set 00:29:57.154 [2024-04-23 21:30:51.300058] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005440 (9): Bad file descriptor 00:29:57.154 [2024-04-23 21:30:51.300074] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:57.154 [2024-04-23 21:30:51.300085] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:57.154 [2024-04-23 21:30:51.300093] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:57.154 [2024-04-23 21:30:51.300104] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
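Each condition above bottoms out in a small query helper over the host RPC socket; minimal sketches reconstructed from the discovery.sh@55/@63/@74 xtrace (the notify_id bookkeeping is inferred from the 2 -> 4 progression visible in the log, not copied from the source tree):

    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    get_subsystem_paths() {   # trsvcid values for one controller, e.g. "4420 4421"
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" |
            jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }
    get_notification_count() {   # count events newer than notify_id, then advance it
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications \
            -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }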
00:29:57.154 21:30:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:57.154 [2024-04-23 21:30:51.309606] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:57.154 [2024-04-23 21:30:51.309897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.154 [2024-04-23 21:30:51.310267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.154 [2024-04-23 21:30:51.310277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005440 with addr=10.0.0.2, port=4420 00:29:57.154 [2024-04-23 21:30:51.310285] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005440 is same with the state(5) to be set 00:29:57.154 [2024-04-23 21:30:51.310297] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005440 (9): Bad file descriptor 00:29:57.155 [2024-04-23 21:30:51.310313] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:57.155 [2024-04-23 21:30:51.310320] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:57.155 [2024-04-23 21:30:51.310327] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:57.155 [2024-04-23 21:30:51.310338] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.155 21:30:51 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:57.155 21:30:51 -- common/autotest_common.sh@904 -- # return 0 00:29:57.155 21:30:51 -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:29:57.155 21:30:51 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:29:57.155 21:30:51 -- common/autotest_common.sh@901 -- # local max=10 00:29:57.155 21:30:51 -- common/autotest_common.sh@902 -- # (( max-- )) 00:29:57.155 [2024-04-23 21:30:51.319341] bdev_nvme.c:6706:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:29:57.155 [2024-04-23 21:30:51.319367] bdev_nvme.c:6697:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:29:57.155 21:30:51 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:29:57.155 21:30:51 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:29:57.155 21:30:51 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:29:57.155 21:30:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:57.155 21:30:51 -- common/autotest_common.sh@10 -- # set +x 00:29:57.155 21:30:51 -- host/discovery.sh@63 -- # xargs 00:29:57.155 21:30:51 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:57.155 21:30:51 -- host/discovery.sh@63 -- # sort -n 00:29:57.155 21:30:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:57.155 21:30:51 -- common/autotest_common.sh@903 -- # [[ 4421 == \4\4\2\1 ]] 00:29:57.155 21:30:51 -- common/autotest_common.sh@904 -- # return 0 00:29:57.155 21:30:51 -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:29:57.155 21:30:51 -- host/discovery.sh@79 -- # expected_count=0 00:29:57.155 21:30:51 
-- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:57.155 21:30:51 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:29:57.155 21:30:51 -- common/autotest_common.sh@901 -- # local max=10 00:29:57.155 21:30:51 -- common/autotest_common.sh@902 -- # (( max-- )) 00:29:57.155 21:30:51 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:29:57.155 21:30:51 -- common/autotest_common.sh@903 -- # get_notification_count 00:29:57.155 21:30:51 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:29:57.155 21:30:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:57.155 21:30:51 -- host/discovery.sh@74 -- # jq '. | length' 00:29:57.155 21:30:51 -- common/autotest_common.sh@10 -- # set +x 00:29:57.155 21:30:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:57.155 21:30:51 -- host/discovery.sh@74 -- # notification_count=0 00:29:57.155 21:30:51 -- host/discovery.sh@75 -- # notify_id=2 00:29:57.155 21:30:51 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:29:57.155 21:30:51 -- common/autotest_common.sh@904 -- # return 0 00:29:57.155 21:30:51 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:29:57.155 21:30:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:57.155 21:30:51 -- common/autotest_common.sh@10 -- # set +x 00:29:57.155 21:30:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:57.155 21:30:51 -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:29:57.155 21:30:51 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:29:57.155 21:30:51 -- common/autotest_common.sh@901 -- # local max=10 00:29:57.155 21:30:51 -- common/autotest_common.sh@902 -- # (( max-- )) 00:29:57.155 21:30:51 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:29:57.155 21:30:51 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:29:57.155 21:30:51 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:57.155 21:30:51 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:57.155 21:30:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:57.155 21:30:51 -- common/autotest_common.sh@10 -- # set +x 00:29:57.155 21:30:51 -- host/discovery.sh@59 -- # sort 00:29:57.155 21:30:51 -- host/discovery.sh@59 -- # xargs 00:29:57.414 21:30:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:57.414 21:30:51 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:29:57.414 21:30:51 -- common/autotest_common.sh@904 -- # return 0 00:29:57.414 21:30:51 -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:29:57.414 21:30:51 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:29:57.414 21:30:51 -- common/autotest_common.sh@901 -- # local max=10 00:29:57.414 21:30:51 -- common/autotest_common.sh@902 -- # (( max-- )) 00:29:57.414 21:30:51 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:29:57.414 21:30:51 -- common/autotest_common.sh@903 -- # get_bdev_list 00:29:57.414 21:30:51 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:57.414 21:30:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:57.414 21:30:51 
-- common/autotest_common.sh@10 -- # set +x 00:29:57.414 21:30:51 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:57.414 21:30:51 -- host/discovery.sh@55 -- # xargs 00:29:57.414 21:30:51 -- host/discovery.sh@55 -- # sort 00:29:57.415 21:30:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:57.415 21:30:51 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:29:57.415 21:30:51 -- common/autotest_common.sh@904 -- # return 0 00:29:57.415 21:30:51 -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:29:57.415 21:30:51 -- host/discovery.sh@79 -- # expected_count=2 00:29:57.415 21:30:51 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:57.415 21:30:51 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:29:57.415 21:30:51 -- common/autotest_common.sh@901 -- # local max=10 00:29:57.415 21:30:51 -- common/autotest_common.sh@902 -- # (( max-- )) 00:29:57.415 21:30:51 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:29:57.415 21:30:51 -- common/autotest_common.sh@903 -- # get_notification_count 00:29:57.415 21:30:51 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:29:57.415 21:30:51 -- host/discovery.sh@74 -- # jq '. | length' 00:29:57.415 21:30:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:57.415 21:30:51 -- common/autotest_common.sh@10 -- # set +x 00:29:57.415 21:30:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:57.415 21:30:51 -- host/discovery.sh@74 -- # notification_count=2 00:29:57.415 21:30:51 -- host/discovery.sh@75 -- # notify_id=4 00:29:57.415 21:30:51 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:29:57.415 21:30:51 -- common/autotest_common.sh@904 -- # return 0 00:29:57.415 21:30:51 -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:57.415 21:30:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:57.415 21:30:51 -- common/autotest_common.sh@10 -- # set +x 00:29:58.415 [2024-04-23 21:30:52.586838] bdev_nvme.c:6919:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:29:58.415 [2024-04-23 21:30:52.586865] bdev_nvme.c:6999:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:29:58.415 [2024-04-23 21:30:52.586884] bdev_nvme.c:6882:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:58.699 [2024-04-23 21:30:52.676940] bdev_nvme.c:6848:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:29:58.699 [2024-04-23 21:30:52.946605] bdev_nvme.c:6738:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:29:58.699 [2024-04-23 21:30:52.946649] bdev_nvme.c:6697:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:29:58.699 21:30:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:58.699 21:30:52 -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:58.699 21:30:52 -- common/autotest_common.sh@638 -- # local es=0 00:29:58.699 21:30:52 -- common/autotest_common.sh@640 -- 
# valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:58.699 21:30:52 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:29:58.699 21:30:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:58.699 21:30:52 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:29:58.699 21:30:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:58.699 21:30:52 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:58.699 21:30:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:58.699 21:30:52 -- common/autotest_common.sh@10 -- # set +x 00:29:58.699 request: 00:29:58.699 { 00:29:58.699 "name": "nvme", 00:29:58.699 "trtype": "tcp", 00:29:58.699 "traddr": "10.0.0.2", 00:29:58.699 "hostnqn": "nqn.2021-12.io.spdk:test", 00:29:58.699 "adrfam": "ipv4", 00:29:58.699 "trsvcid": "8009", 00:29:58.700 "wait_for_attach": true, 00:29:58.700 "method": "bdev_nvme_start_discovery", 00:29:58.700 "req_id": 1 00:29:58.700 } 00:29:58.700 Got JSON-RPC error response 00:29:58.700 response: 00:29:58.700 { 00:29:58.700 "code": -17, 00:29:58.700 "message": "File exists" 00:29:58.700 } 00:29:58.700 21:30:52 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:29:58.700 21:30:52 -- common/autotest_common.sh@641 -- # es=1 00:29:58.700 21:30:52 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:29:58.700 21:30:52 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:29:58.700 21:30:52 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:29:58.700 21:30:52 -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:29:58.700 21:30:52 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:29:58.700 21:30:52 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:29:58.700 21:30:52 -- host/discovery.sh@67 -- # sort 00:29:58.700 21:30:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:58.700 21:30:52 -- common/autotest_common.sh@10 -- # set +x 00:29:58.700 21:30:52 -- host/discovery.sh@67 -- # xargs 00:29:58.959 21:30:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:58.959 21:30:53 -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:29:58.959 21:30:53 -- host/discovery.sh@146 -- # get_bdev_list 00:29:58.959 21:30:53 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:58.959 21:30:53 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:58.959 21:30:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:58.959 21:30:53 -- host/discovery.sh@55 -- # sort 00:29:58.959 21:30:53 -- common/autotest_common.sh@10 -- # set +x 00:29:58.959 21:30:53 -- host/discovery.sh@55 -- # xargs 00:29:58.959 21:30:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:58.959 21:30:53 -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:58.959 21:30:53 -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:58.959 21:30:53 -- common/autotest_common.sh@638 -- # local es=0 00:29:58.959 21:30:53 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:58.959 21:30:53 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 
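The request/response pair above is a deliberately failing call: starting discovery again under the existing bdev name must return JSON-RPC error -17 ("File exists"). The NOT wrapper traced at autotest_common.sh@638-665 inverts the exit status; a simplified sketch (the real helper also validates its argument and special-cases signal exits, es > 128):

    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))   # NOT succeeds only when the wrapped command failed
    }
    # duplicate start under the same -b name is expected to fail:
    NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme \
        -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w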
00:29:58.959 21:30:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:58.959 21:30:53 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:29:58.959 21:30:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:58.959 21:30:53 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:58.959 21:30:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:58.959 21:30:53 -- common/autotest_common.sh@10 -- # set +x 00:29:58.959 request: 00:29:58.959 { 00:29:58.959 "name": "nvme_second", 00:29:58.959 "trtype": "tcp", 00:29:58.959 "traddr": "10.0.0.2", 00:29:58.959 "hostnqn": "nqn.2021-12.io.spdk:test", 00:29:58.959 "adrfam": "ipv4", 00:29:58.959 "trsvcid": "8009", 00:29:58.959 "wait_for_attach": true, 00:29:58.959 "method": "bdev_nvme_start_discovery", 00:29:58.959 "req_id": 1 00:29:58.959 } 00:29:58.959 Got JSON-RPC error response 00:29:58.959 response: 00:29:58.959 { 00:29:58.959 "code": -17, 00:29:58.959 "message": "File exists" 00:29:58.959 } 00:29:58.959 21:30:53 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:29:58.959 21:30:53 -- common/autotest_common.sh@641 -- # es=1 00:29:58.959 21:30:53 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:29:58.959 21:30:53 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:29:58.959 21:30:53 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:29:58.959 21:30:53 -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:29:58.959 21:30:53 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:29:58.959 21:30:53 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:29:58.959 21:30:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:58.959 21:30:53 -- host/discovery.sh@67 -- # sort 00:29:58.959 21:30:53 -- host/discovery.sh@67 -- # xargs 00:29:58.959 21:30:53 -- common/autotest_common.sh@10 -- # set +x 00:29:58.959 21:30:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:58.959 21:30:53 -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:29:58.959 21:30:53 -- host/discovery.sh@152 -- # get_bdev_list 00:29:58.959 21:30:53 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:58.959 21:30:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:58.959 21:30:53 -- common/autotest_common.sh@10 -- # set +x 00:29:58.959 21:30:53 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:58.959 21:30:53 -- host/discovery.sh@55 -- # sort 00:29:58.959 21:30:53 -- host/discovery.sh@55 -- # xargs 00:29:58.959 21:30:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:58.959 21:30:53 -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:58.959 21:30:53 -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:29:58.959 21:30:53 -- common/autotest_common.sh@638 -- # local es=0 00:29:58.959 21:30:53 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:29:58.959 21:30:53 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:29:58.959 21:30:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:58.959 21:30:53 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:29:58.959 21:30:53 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:58.959 21:30:53 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:29:58.959 21:30:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:58.959 21:30:53 -- common/autotest_common.sh@10 -- # set +x 00:29:59.894 [2024-04-23 21:30:54.155394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.894 [2024-04-23 21:30:54.155847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.894 [2024-04-23 21:30:54.155862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=8010 00:29:59.894 [2024-04-23 21:30:54.155894] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:59.894 [2024-04-23 21:30:54.155903] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:59.894 [2024-04-23 21:30:54.155912] bdev_nvme.c:6981:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:30:01.274 [2024-04-23 21:30:55.155529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.274 [2024-04-23 21:30:55.155923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.274 [2024-04-23 21:30:55.155934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010240 with addr=10.0.0.2, port=8010 00:30:01.274 [2024-04-23 21:30:55.155961] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:01.274 [2024-04-23 21:30:55.155969] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:01.274 [2024-04-23 21:30:55.155976] bdev_nvme.c:6981:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:30:02.211 [2024-04-23 21:30:56.154931] bdev_nvme.c:6962:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:30:02.211 request: 00:30:02.211 { 00:30:02.211 "name": "nvme_second", 00:30:02.211 "trtype": "tcp", 00:30:02.211 "traddr": "10.0.0.2", 00:30:02.211 "hostnqn": "nqn.2021-12.io.spdk:test", 00:30:02.211 "adrfam": "ipv4", 00:30:02.211 "trsvcid": "8010", 00:30:02.211 "attach_timeout_ms": 3000, 00:30:02.211 "method": "bdev_nvme_start_discovery", 00:30:02.211 "req_id": 1 00:30:02.211 } 00:30:02.211 Got JSON-RPC error response 00:30:02.211 response: 00:30:02.211 { 00:30:02.211 "code": -110, 00:30:02.211 "message": "Connection timed out" 00:30:02.211 } 00:30:02.211 21:30:56 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:30:02.211 21:30:56 -- common/autotest_common.sh@641 -- # es=1 00:30:02.211 21:30:56 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:30:02.211 21:30:56 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:30:02.211 21:30:56 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:30:02.211 21:30:56 -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:30:02.211 21:30:56 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:02.211 21:30:56 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:30:02.211 21:30:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:02.211 21:30:56 -- host/discovery.sh@67 -- # sort 00:30:02.211 21:30:56 -- host/discovery.sh@67 -- # xargs 00:30:02.211 21:30:56 -- common/autotest_common.sh@10 -- # set +x 00:30:02.211 21:30:56 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:02.211 21:30:56 -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:30:02.211 21:30:56 -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:30:02.211 21:30:56 -- host/discovery.sh@161 -- # kill 1617698 00:30:02.211 21:30:56 -- host/discovery.sh@162 -- # nvmftestfini 00:30:02.211 21:30:56 -- nvmf/common.sh@477 -- # nvmfcleanup 00:30:02.211 21:30:56 -- nvmf/common.sh@117 -- # sync 00:30:02.211 21:30:56 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:02.211 21:30:56 -- nvmf/common.sh@120 -- # set +e 00:30:02.211 21:30:56 -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:02.211 21:30:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:02.211 rmmod nvme_tcp 00:30:02.211 rmmod nvme_fabrics 00:30:02.211 rmmod nvme_keyring 00:30:02.211 21:30:56 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:02.211 21:30:56 -- nvmf/common.sh@124 -- # set -e 00:30:02.211 21:30:56 -- nvmf/common.sh@125 -- # return 0 00:30:02.211 21:30:56 -- nvmf/common.sh@478 -- # '[' -n 1617394 ']' 00:30:02.211 21:30:56 -- nvmf/common.sh@479 -- # killprocess 1617394 00:30:02.211 21:30:56 -- common/autotest_common.sh@936 -- # '[' -z 1617394 ']' 00:30:02.211 21:30:56 -- common/autotest_common.sh@940 -- # kill -0 1617394 00:30:02.211 21:30:56 -- common/autotest_common.sh@941 -- # uname 00:30:02.211 21:30:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:02.212 21:30:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1617394 00:30:02.212 21:30:56 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:30:02.212 21:30:56 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:30:02.212 21:30:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1617394' 00:30:02.212 killing process with pid 1617394 00:30:02.212 21:30:56 -- common/autotest_common.sh@955 -- # kill 1617394 00:30:02.212 21:30:56 -- common/autotest_common.sh@960 -- # wait 1617394 00:30:02.472 21:30:56 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:30:02.472 21:30:56 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:30:02.472 21:30:56 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:30:02.472 21:30:56 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:02.472 21:30:56 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:02.472 21:30:56 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:02.472 21:30:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:02.472 21:30:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:05.008 21:30:58 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:05.008 00:30:05.008 real 0m18.106s 00:30:05.008 user 0m21.718s 00:30:05.008 sys 0m5.726s 00:30:05.008 21:30:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:05.008 21:30:58 -- common/autotest_common.sh@10 -- # set +x 00:30:05.008 ************************************ 00:30:05.008 END TEST nvmf_discovery 00:30:05.008 ************************************ 00:30:05.008 21:30:58 -- nvmf/nvmf.sh@100 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:30:05.008 21:30:58 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:30:05.008 21:30:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:05.008 21:30:58 -- common/autotest_common.sh@10 -- # set +x 00:30:05.008 ************************************ 00:30:05.008 START TEST nvmf_discovery_remove_ifc 
00:30:05.008 ************************************ 00:30:05.008 21:30:58 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:30:05.008 * Looking for test storage... 00:30:05.008 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:30:05.008 21:30:58 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:30:05.008 21:30:58 -- nvmf/common.sh@7 -- # uname -s 00:30:05.008 21:30:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:05.008 21:30:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:05.008 21:30:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:05.008 21:30:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:05.008 21:30:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:05.008 21:30:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:05.008 21:30:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:05.008 21:30:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:05.008 21:30:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:05.008 21:30:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:05.008 21:30:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:30:05.008 21:30:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:30:05.008 21:30:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:05.008 21:30:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:05.008 21:30:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:30:05.008 21:30:59 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:05.008 21:30:59 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:30:05.008 21:30:59 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:05.008 21:30:59 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:05.008 21:30:59 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:05.008 21:30:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:05.008 21:30:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:05.008 21:30:59 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:05.008 21:30:59 -- paths/export.sh@5 -- # export PATH 00:30:05.008 21:30:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:05.008 21:30:59 -- nvmf/common.sh@47 -- # : 0 00:30:05.008 21:30:59 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:05.008 21:30:59 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:05.008 21:30:59 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:05.008 21:30:59 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:05.008 21:30:59 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:05.008 21:30:59 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:05.008 21:30:59 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:05.008 21:30:59 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:05.008 21:30:59 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:30:05.008 21:30:59 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:30:05.008 21:30:59 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:30:05.008 21:30:59 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:30:05.008 21:30:59 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:30:05.008 21:30:59 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:30:05.008 21:30:59 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:30:05.008 21:30:59 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:30:05.008 21:30:59 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:05.008 21:30:59 -- nvmf/common.sh@437 -- # prepare_net_devs 00:30:05.008 21:30:59 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:30:05.008 21:30:59 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:30:05.008 21:30:59 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:05.008 21:30:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:05.008 21:30:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:05.008 21:30:59 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:30:05.008 21:30:59 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:30:05.008 21:30:59 -- nvmf/common.sh@285 -- # xtrace_disable 00:30:05.008 21:30:59 -- common/autotest_common.sh@10 -- # set +x 00:30:10.284 21:31:03 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:30:10.284 21:31:03 -- nvmf/common.sh@291 -- # pci_devs=() 00:30:10.284 21:31:03 -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:10.284 
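As read from the discovery_remove_ifc.sh@19-@26 xtrace above, the constants reused through the rest of this test are:

    discovery_port=8009
    discovery_nqn=nqn.2014-08.org.nvmexpress.discovery
    nqn=nqn.2016-06.io.spdk:cnode      # subsystem base; the test appends 0 (cnode0)
    host_nqn=nqn.2021-12.io.spdk:test
    host_sock=/tmp/host.sock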
21:31:03 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:10.284 21:31:03 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:10.284 21:31:03 -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:10.284 21:31:03 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:10.284 21:31:03 -- nvmf/common.sh@295 -- # net_devs=() 00:30:10.284 21:31:03 -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:10.284 21:31:03 -- nvmf/common.sh@296 -- # e810=() 00:30:10.284 21:31:03 -- nvmf/common.sh@296 -- # local -ga e810 00:30:10.284 21:31:03 -- nvmf/common.sh@297 -- # x722=() 00:30:10.284 21:31:03 -- nvmf/common.sh@297 -- # local -ga x722 00:30:10.284 21:31:03 -- nvmf/common.sh@298 -- # mlx=() 00:30:10.284 21:31:03 -- nvmf/common.sh@298 -- # local -ga mlx 00:30:10.284 21:31:03 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:10.284 21:31:03 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:10.284 21:31:03 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:10.284 21:31:03 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:10.284 21:31:03 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:10.284 21:31:03 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:10.284 21:31:03 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:10.284 21:31:03 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:10.284 21:31:03 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:10.284 21:31:03 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:10.284 21:31:03 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:10.284 21:31:03 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:10.284 21:31:03 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:10.284 21:31:03 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:30:10.284 21:31:03 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:30:10.284 21:31:03 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:30:10.284 21:31:03 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:10.284 21:31:03 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:10.284 21:31:03 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:30:10.284 Found 0000:27:00.0 (0x8086 - 0x159b) 00:30:10.284 21:31:03 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:10.284 21:31:03 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:10.284 21:31:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:10.284 21:31:03 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:10.284 21:31:03 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:10.284 21:31:03 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:10.284 21:31:03 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:30:10.284 Found 0000:27:00.1 (0x8086 - 0x159b) 00:30:10.284 21:31:03 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:10.284 21:31:03 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:10.284 21:31:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:10.284 21:31:03 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:10.284 21:31:03 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:10.284 21:31:03 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:10.284 21:31:03 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:30:10.284 21:31:03 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:10.284 
21:31:03 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:10.284 21:31:03 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:30:10.284 21:31:03 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:10.284 21:31:03 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:30:10.284 Found net devices under 0000:27:00.0: cvl_0_0 00:30:10.284 21:31:03 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:30:10.284 21:31:03 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:10.284 21:31:03 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:10.284 21:31:03 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:30:10.284 21:31:03 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:10.284 21:31:03 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:30:10.284 Found net devices under 0000:27:00.1: cvl_0_1 00:30:10.284 21:31:03 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:30:10.284 21:31:03 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:30:10.284 21:31:03 -- nvmf/common.sh@403 -- # is_hw=yes 00:30:10.284 21:31:03 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:30:10.284 21:31:03 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:30:10.284 21:31:03 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:30:10.284 21:31:03 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:10.284 21:31:03 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:10.284 21:31:03 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:10.284 21:31:03 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:10.284 21:31:03 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:10.284 21:31:03 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:10.284 21:31:03 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:10.284 21:31:03 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:10.284 21:31:03 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:10.284 21:31:03 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:10.284 21:31:03 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:10.284 21:31:03 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:10.284 21:31:03 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:10.284 21:31:03 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:10.285 21:31:03 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:10.285 21:31:03 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:10.285 21:31:03 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:10.285 21:31:04 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:10.285 21:31:04 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:10.285 21:31:04 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:10.285 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:10.285 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.536 ms 00:30:10.285 00:30:10.285 --- 10.0.0.2 ping statistics --- 00:30:10.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:10.285 rtt min/avg/max/mdev = 0.536/0.536/0.536/0.000 ms 00:30:10.285 21:31:04 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:10.285 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:10.285 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.244 ms 00:30:10.285 00:30:10.285 --- 10.0.0.1 ping statistics --- 00:30:10.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:10.285 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:30:10.285 21:31:04 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:10.285 21:31:04 -- nvmf/common.sh@411 -- # return 0 00:30:10.285 21:31:04 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:30:10.285 21:31:04 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:10.285 21:31:04 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:30:10.285 21:31:04 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:30:10.285 21:31:04 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:10.285 21:31:04 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:30:10.285 21:31:04 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:30:10.285 21:31:04 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:30:10.285 21:31:04 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:30:10.285 21:31:04 -- common/autotest_common.sh@710 -- # xtrace_disable 00:30:10.285 21:31:04 -- common/autotest_common.sh@10 -- # set +x 00:30:10.285 21:31:04 -- nvmf/common.sh@470 -- # nvmfpid=1623245 00:30:10.285 21:31:04 -- nvmf/common.sh@471 -- # waitforlisten 1623245 00:30:10.285 21:31:04 -- common/autotest_common.sh@817 -- # '[' -z 1623245 ']' 00:30:10.285 21:31:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:10.285 21:31:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:10.285 21:31:04 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:30:10.285 21:31:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:10.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:10.285 21:31:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:10.285 21:31:04 -- common/autotest_common.sh@10 -- # set +x 00:30:10.285 [2024-04-23 21:31:04.130620] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 00:30:10.285 [2024-04-23 21:31:04.130692] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:10.285 EAL: No free 2048 kB hugepages reported on node 1 00:30:10.285 [2024-04-23 21:31:04.220368] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:10.285 [2024-04-23 21:31:04.326182] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:10.285 [2024-04-23 21:31:04.326215] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:10.285 [2024-04-23 21:31:04.326224] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:10.285 [2024-04-23 21:31:04.326234] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:10.285 [2024-04-23 21:31:04.326240] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
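For reference, the nvmf_tcp_init sequence traced above reduces to the following namespace plumbing (addr-flush steps omitted; interface names as detected for 0000:27:00.x):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side stays in the default ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                               # host -> target, verified above
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 # target -> host, verified above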
00:30:10.285 [2024-04-23 21:31:04.326265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:10.852 21:31:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:10.852 21:31:04 -- common/autotest_common.sh@850 -- # return 0 00:30:10.852 21:31:04 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:30:10.852 21:31:04 -- common/autotest_common.sh@716 -- # xtrace_disable 00:30:10.852 21:31:04 -- common/autotest_common.sh@10 -- # set +x 00:30:10.852 21:31:04 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:10.852 21:31:04 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:30:10.852 21:31:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:10.852 21:31:04 -- common/autotest_common.sh@10 -- # set +x 00:30:10.852 [2024-04-23 21:31:04.871620] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:10.852 [2024-04-23 21:31:04.879787] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:30:10.852 null0 00:30:10.852 [2024-04-23 21:31:04.911709] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:10.852 21:31:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:10.852 21:31:04 -- host/discovery_remove_ifc.sh@59 -- # hostpid=1623550 00:30:10.852 21:31:04 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1623550 /tmp/host.sock 00:30:10.852 21:31:04 -- common/autotest_common.sh@817 -- # '[' -z 1623550 ']' 00:30:10.852 21:31:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:30:10.852 21:31:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:10.852 21:31:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:30:10.852 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:30:10.852 21:31:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:10.852 21:31:04 -- common/autotest_common.sh@10 -- # set +x 00:30:10.852 21:31:04 -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:30:10.852 [2024-04-23 21:31:05.007373] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 
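The run therefore has two SPDK apps side by side, schematically (binary paths shortened from the trace):

    # target: inside the namespace, default RPC socket, -e 0xFFFF trace groups, core mask 0x2
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!     # 1623245 in this run
    # host/initiator: default namespace, RPC on /tmp/host.sock, bdev_nvme debug logging
    build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
    hostpid=$!     # 1623550 in this run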
00:30:10.852 [2024-04-23 21:31:05.007476] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1623550 ] 00:30:10.852 EAL: No free 2048 kB hugepages reported on node 1 00:30:10.852 [2024-04-23 21:31:05.118586] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:11.113 [2024-04-23 21:31:05.207370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:11.683 21:31:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:11.683 21:31:05 -- common/autotest_common.sh@850 -- # return 0 00:30:11.683 21:31:05 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:11.683 21:31:05 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:30:11.683 21:31:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:11.683 21:31:05 -- common/autotest_common.sh@10 -- # set +x 00:30:11.683 21:31:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:11.683 21:31:05 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:30:11.683 21:31:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:11.683 21:31:05 -- common/autotest_common.sh@10 -- # set +x 00:30:11.683 21:31:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:11.683 21:31:05 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:30:11.683 21:31:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:11.683 21:31:05 -- common/autotest_common.sh@10 -- # set +x 00:30:13.061 [2024-04-23 21:31:06.916599] bdev_nvme.c:6919:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:13.061 [2024-04-23 21:31:06.916632] bdev_nvme.c:6999:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:13.061 [2024-04-23 21:31:06.916652] bdev_nvme.c:6882:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:13.061 [2024-04-23 21:31:07.045850] bdev_nvme.c:6848:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:30:13.061 [2024-04-23 21:31:07.229301] bdev_nvme.c:7709:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:30:13.061 [2024-04-23 21:31:07.229360] bdev_nvme.c:7709:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:30:13.061 [2024-04-23 21:31:07.229398] bdev_nvme.c:7709:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:30:13.061 [2024-04-23 21:31:07.229417] bdev_nvme.c:6738:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:13.061 [2024-04-23 21:31:07.229442] bdev_nvme.c:6697:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:13.061 21:31:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:13.061 21:31:07 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:30:13.061 21:31:07 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:13.061 21:31:07 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:13.061 21:31:07 -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:13.061 21:31:07 -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:13.061 [2024-04-23 21:31:07.234776] bdev_nvme.c:1605:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x614000006840 was disconnected and freed. delete nvme_qpair. 00:30:13.061 21:31:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:13.061 21:31:07 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:13.061 21:31:07 -- common/autotest_common.sh@10 -- # set +x 00:30:13.061 21:31:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:13.061 21:31:07 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:30:13.061 21:31:07 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:30:13.061 21:31:07 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:30:13.322 21:31:07 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:30:13.322 21:31:07 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:13.322 21:31:07 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:13.322 21:31:07 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:13.322 21:31:07 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:13.322 21:31:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:13.322 21:31:07 -- common/autotest_common.sh@10 -- # set +x 00:30:13.322 21:31:07 -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:13.322 21:31:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:13.322 21:31:07 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:13.322 21:31:07 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:14.263 21:31:08 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:14.263 21:31:08 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:14.263 21:31:08 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:14.263 21:31:08 -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:14.263 21:31:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:14.263 21:31:08 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:14.263 21:31:08 -- common/autotest_common.sh@10 -- # set +x 00:30:14.263 21:31:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:14.263 21:31:08 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:14.263 21:31:08 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:15.637 21:31:09 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:15.637 21:31:09 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:15.637 21:31:09 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:15.637 21:31:09 -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:15.637 21:31:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:15.637 21:31:09 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:15.637 21:31:09 -- common/autotest_common.sh@10 -- # set +x 00:30:15.637 21:31:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:15.637 21:31:09 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:15.637 21:31:09 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:16.576 21:31:10 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:16.576 21:31:10 -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:16.576 21:31:10 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:16.577 21:31:10 -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:16.577 21:31:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:16.577 21:31:10 -- common/autotest_common.sh@10 -- # set +x 00:30:16.577 21:31:10 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:16.577 21:31:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:16.577 21:31:10 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:16.577 21:31:10 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:17.513 21:31:11 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:17.513 21:31:11 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:17.513 21:31:11 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:17.513 21:31:11 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:17.513 21:31:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:17.513 21:31:11 -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:17.513 21:31:11 -- common/autotest_common.sh@10 -- # set +x 00:30:17.513 21:31:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:17.513 21:31:11 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:17.513 21:31:11 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:18.451 21:31:12 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:18.451 21:31:12 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:18.451 21:31:12 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:18.451 21:31:12 -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:18.451 21:31:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:18.451 21:31:12 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:18.451 21:31:12 -- common/autotest_common.sh@10 -- # set +x 00:30:18.451 [2024-04-23 21:31:12.657222] /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:30:18.451 [2024-04-23 21:31:12.657284] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:18.451 [2024-04-23 21:31:12.657298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.451 [2024-04-23 21:31:12.657311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:18.451 [2024-04-23 21:31:12.657324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.451 [2024-04-23 21:31:12.657333] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:18.451 [2024-04-23 21:31:12.657341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.451 [2024-04-23 21:31:12.657349] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:18.451 [2024-04-23 21:31:12.657357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.451 [2024-04-23 21:31:12.657365] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 
00:30:18.451 [2024-04-23 21:31:12.657373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.451 [2024-04-23 21:31:12.657381] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005640 is same with the state(5) to be set 00:30:18.451 21:31:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:18.451 [2024-04-23 21:31:12.667217] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005640 (9): Bad file descriptor 00:30:18.451 [2024-04-23 21:31:12.677236] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:18.451 21:31:12 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:18.451 21:31:12 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:19.829 21:31:13 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:19.829 21:31:13 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:19.829 21:31:13 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:19.829 21:31:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:19.829 21:31:13 -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:19.829 21:31:13 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:19.829 21:31:13 -- common/autotest_common.sh@10 -- # set +x 00:30:19.829 [2024-04-23 21:31:13.736667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:30:20.770 [2024-04-23 21:31:14.760660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:30:20.770 [2024-04-23 21:31:14.760731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005640 with addr=10.0.0.2, port=4420 00:30:20.770 [2024-04-23 21:31:14.760758] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005640 is same with the state(5) to be set 00:30:20.770 [2024-04-23 21:31:14.761391] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005640 (9): Bad file descriptor 00:30:20.770 [2024-04-23 21:31:14.761426] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:20.770 [2024-04-23 21:31:14.761470] bdev_nvme.c:6670:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:30:20.770 [2024-04-23 21:31:14.761511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:20.770 [2024-04-23 21:31:14.761533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:20.770 [2024-04-23 21:31:14.761558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:20.770 [2024-04-23 21:31:14.761573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:20.770 [2024-04-23 21:31:14.761588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:20.770 [2024-04-23 21:31:14.761603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:20.770 [2024-04-23 21:31:14.761619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:20.770 [2024-04-23 21:31:14.761652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:20.770 [2024-04-23 21:31:14.761669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:30:20.770 [2024-04-23 21:31:14.761683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:20.770 [2024-04-23 21:31:14.761696] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
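The errno 110 (ETIMEDOUT) failures above are the intended effect of deleting 10.0.0.2 from cvl_0_0 and downing the interface earlier in the trace. How long the host keeps retrying is governed by the knobs passed to bdev_nvme_start_discovery at the start of the test; a sketch with the values copied from this log:

    # --reconnect-delay-sec 1      pause between reconnect attempts
    # --fast-io-fail-timeout-sec 1 pending I/O starts failing after 1 s down
    # --ctrlr-loss-timeout-sec 2   retry for 2 s, then delete the controller
    rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach

Once the 2-second loss timeout expires, bdev_nvme deletes the controller, nvme0n1 drops out of bdev_get_bdevs, and the wait_for_bdev '' poll loop below can complete; re-adding the address then lets discovery attach a fresh controller as nvme1n1.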
00:30:20.770 [2024-04-23 21:31:14.761810] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005240 (9): Bad file descriptor 00:30:20.770 [2024-04-23 21:31:14.762885] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:30:20.770 [2024-04-23 21:31:14.762904] nvme_ctrlr.c:1148:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:30:20.770 21:31:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:20.770 21:31:14 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:20.770 21:31:14 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:21.705 21:31:15 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:21.705 21:31:15 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:21.705 21:31:15 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:21.705 21:31:15 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:21.705 21:31:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:21.705 21:31:15 -- common/autotest_common.sh@10 -- # set +x 00:30:21.705 21:31:15 -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:21.705 21:31:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:21.705 21:31:15 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:30:21.705 21:31:15 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:21.705 21:31:15 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:21.705 21:31:15 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:30:21.705 21:31:15 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:21.705 21:31:15 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:21.705 21:31:15 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:21.705 21:31:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:21.705 21:31:15 -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:21.705 21:31:15 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:21.705 21:31:15 -- common/autotest_common.sh@10 -- # set +x 00:30:21.705 21:31:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:21.705 21:31:15 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:30:21.705 21:31:15 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:22.647 [2024-04-23 21:31:16.808923] bdev_nvme.c:6919:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:22.647 [2024-04-23 21:31:16.808949] bdev_nvme.c:6999:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:22.647 [2024-04-23 21:31:16.808977] bdev_nvme.c:6882:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:22.647 [2024-04-23 21:31:16.897034] bdev_nvme.c:6848:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:30:22.906 [2024-04-23 21:31:16.957653] bdev_nvme.c:7709:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:30:22.906 [2024-04-23 21:31:16.957705] bdev_nvme.c:7709:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:30:22.906 [2024-04-23 21:31:16.957738] bdev_nvme.c:7709:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:30:22.906 [2024-04-23 21:31:16.957757] bdev_nvme.c:6738:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach 
nvme1 done 00:30:22.906 [2024-04-23 21:31:16.957768] bdev_nvme.c:6697:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:22.906 21:31:16 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:22.906 21:31:16 -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:22.906 21:31:16 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:22.906 21:31:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:22.906 21:31:16 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:22.906 21:31:16 -- common/autotest_common.sh@10 -- # set +x 00:30:22.906 21:31:16 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:22.906 [2024-04-23 21:31:16.966401] bdev_nvme.c:1605:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x61400000a040 was disconnected and freed. delete nvme_qpair. 00:30:22.906 21:31:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:22.906 21:31:16 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:30:22.906 21:31:16 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:30:22.906 21:31:16 -- host/discovery_remove_ifc.sh@90 -- # killprocess 1623550 00:30:22.906 21:31:17 -- common/autotest_common.sh@936 -- # '[' -z 1623550 ']' 00:30:22.906 21:31:17 -- common/autotest_common.sh@940 -- # kill -0 1623550 00:30:22.906 21:31:17 -- common/autotest_common.sh@941 -- # uname 00:30:22.906 21:31:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:22.906 21:31:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1623550 00:30:22.906 21:31:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:30:22.906 21:31:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:30:22.906 21:31:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1623550' 00:30:22.907 killing process with pid 1623550 00:30:22.907 21:31:17 -- common/autotest_common.sh@955 -- # kill 1623550 00:30:22.907 21:31:17 -- common/autotest_common.sh@960 -- # wait 1623550 00:30:23.165 21:31:17 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:30:23.165 21:31:17 -- nvmf/common.sh@477 -- # nvmfcleanup 00:30:23.165 21:31:17 -- nvmf/common.sh@117 -- # sync 00:30:23.165 21:31:17 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:23.165 21:31:17 -- nvmf/common.sh@120 -- # set +e 00:30:23.165 21:31:17 -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:23.165 21:31:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:23.165 rmmod nvme_tcp 00:30:23.423 rmmod nvme_fabrics 00:30:23.423 rmmod nvme_keyring 00:30:23.423 21:31:17 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:23.423 21:31:17 -- nvmf/common.sh@124 -- # set -e 00:30:23.423 21:31:17 -- nvmf/common.sh@125 -- # return 0 00:30:23.423 21:31:17 -- nvmf/common.sh@478 -- # '[' -n 1623245 ']' 00:30:23.423 21:31:17 -- nvmf/common.sh@479 -- # killprocess 1623245 00:30:23.423 21:31:17 -- common/autotest_common.sh@936 -- # '[' -z 1623245 ']' 00:30:23.423 21:31:17 -- common/autotest_common.sh@940 -- # kill -0 1623245 00:30:23.423 21:31:17 -- common/autotest_common.sh@941 -- # uname 00:30:23.423 21:31:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:23.423 21:31:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1623245 00:30:23.424 21:31:17 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:30:23.424 21:31:17 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:30:23.424 21:31:17 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 1623245' 00:30:23.424 killing process with pid 1623245 00:30:23.424 21:31:17 -- common/autotest_common.sh@955 -- # kill 1623245 00:30:23.424 21:31:17 -- common/autotest_common.sh@960 -- # wait 1623245 00:30:23.992 21:31:17 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:30:23.992 21:31:17 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:30:23.992 21:31:17 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:30:23.992 21:31:17 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:23.992 21:31:17 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:23.992 21:31:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:23.992 21:31:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:23.992 21:31:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:25.899 21:31:20 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:25.899 00:30:25.899 real 0m21.080s 00:30:25.899 user 0m25.805s 00:30:25.899 sys 0m4.926s 00:30:25.899 21:31:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:25.899 21:31:20 -- common/autotest_common.sh@10 -- # set +x 00:30:25.899 ************************************ 00:30:25.899 END TEST nvmf_discovery_remove_ifc 00:30:25.899 ************************************ 00:30:25.899 21:31:20 -- nvmf/nvmf.sh@101 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:30:25.899 21:31:20 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:30:25.899 21:31:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:25.899 21:31:20 -- common/autotest_common.sh@10 -- # set +x 00:30:25.899 ************************************ 00:30:25.899 START TEST nvmf_identify_kernel_target 00:30:25.899 ************************************ 00:30:25.899 21:31:20 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:30:26.160 * Looking for test storage... 
00:30:26.160 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:30:26.160 21:31:20 -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:30:26.160 21:31:20 -- nvmf/common.sh@7 -- # uname -s 00:30:26.160 21:31:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:26.160 21:31:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:26.160 21:31:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:26.160 21:31:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:26.160 21:31:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:26.160 21:31:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:26.160 21:31:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:26.160 21:31:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:26.160 21:31:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:26.160 21:31:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:26.160 21:31:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:30:26.160 21:31:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:30:26.160 21:31:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:26.160 21:31:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:26.160 21:31:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:30:26.160 21:31:20 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:26.160 21:31:20 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:30:26.160 21:31:20 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:26.160 21:31:20 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:26.160 21:31:20 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:26.160 21:31:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:26.160 21:31:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:26.160 21:31:20 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:26.160 21:31:20 -- paths/export.sh@5 -- # export PATH 00:30:26.160 21:31:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:26.160 21:31:20 -- nvmf/common.sh@47 -- # : 0 00:30:26.160 21:31:20 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:26.160 21:31:20 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:26.160 21:31:20 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:26.160 21:31:20 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:26.160 21:31:20 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:26.160 21:31:20 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:26.160 21:31:20 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:26.160 21:31:20 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:26.160 21:31:20 -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:30:26.160 21:31:20 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:30:26.160 21:31:20 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:26.160 21:31:20 -- nvmf/common.sh@437 -- # prepare_net_devs 00:30:26.160 21:31:20 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:30:26.160 21:31:20 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:30:26.160 21:31:20 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:26.160 21:31:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:26.160 21:31:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:26.160 21:31:20 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:30:26.160 21:31:20 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:30:26.160 21:31:20 -- nvmf/common.sh@285 -- # xtrace_disable 00:30:26.160 21:31:20 -- common/autotest_common.sh@10 -- # set +x 00:30:31.434 21:31:25 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:30:31.434 21:31:25 -- nvmf/common.sh@291 -- # pci_devs=() 00:30:31.434 21:31:25 -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:31.434 21:31:25 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:31.434 21:31:25 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:31.434 21:31:25 -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:31.434 21:31:25 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:31.434 21:31:25 -- nvmf/common.sh@295 -- # net_devs=() 00:30:31.434 21:31:25 -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:31.434 21:31:25 -- nvmf/common.sh@296 -- # e810=() 00:30:31.434 21:31:25 -- nvmf/common.sh@296 -- # local -ga e810 00:30:31.434 21:31:25 -- nvmf/common.sh@297 
-- # x722=() 00:30:31.434 21:31:25 -- nvmf/common.sh@297 -- # local -ga x722 00:30:31.434 21:31:25 -- nvmf/common.sh@298 -- # mlx=() 00:30:31.434 21:31:25 -- nvmf/common.sh@298 -- # local -ga mlx 00:30:31.434 21:31:25 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:31.434 21:31:25 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:31.434 21:31:25 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:31.434 21:31:25 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:31.434 21:31:25 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:31.434 21:31:25 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:31.434 21:31:25 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:31.434 21:31:25 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:31.434 21:31:25 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:31.434 21:31:25 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:31.434 21:31:25 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:31.434 21:31:25 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:31.434 21:31:25 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:31.434 21:31:25 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:30:31.434 21:31:25 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:30:31.435 21:31:25 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:30:31.435 21:31:25 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:31.435 21:31:25 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:31.435 21:31:25 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:30:31.435 Found 0000:27:00.0 (0x8086 - 0x159b) 00:30:31.435 21:31:25 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:31.435 21:31:25 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:31.435 21:31:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:31.435 21:31:25 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:31.435 21:31:25 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:31.435 21:31:25 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:31.435 21:31:25 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:30:31.435 Found 0000:27:00.1 (0x8086 - 0x159b) 00:30:31.435 21:31:25 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:31.435 21:31:25 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:31.435 21:31:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:31.435 21:31:25 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:31.435 21:31:25 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:31.435 21:31:25 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:31.435 21:31:25 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:30:31.435 21:31:25 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:31.435 21:31:25 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:31.435 21:31:25 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:30:31.435 21:31:25 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:31.435 21:31:25 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:30:31.435 Found net devices under 0000:27:00.0: cvl_0_0 00:30:31.435 21:31:25 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:30:31.435 21:31:25 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:30:31.435 21:31:25 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:31.435 21:31:25 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:30:31.435 21:31:25 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:31.435 21:31:25 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:30:31.435 Found net devices under 0000:27:00.1: cvl_0_1 00:30:31.435 21:31:25 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:30:31.435 21:31:25 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:30:31.435 21:31:25 -- nvmf/common.sh@403 -- # is_hw=yes 00:30:31.435 21:31:25 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:30:31.435 21:31:25 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:30:31.435 21:31:25 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:30:31.435 21:31:25 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:31.435 21:31:25 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:31.435 21:31:25 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:31.435 21:31:25 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:31.435 21:31:25 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:31.435 21:31:25 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:31.435 21:31:25 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:31.435 21:31:25 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:31.435 21:31:25 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:31.435 21:31:25 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:31.435 21:31:25 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:31.435 21:31:25 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:31.435 21:31:25 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:31.435 21:31:25 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:31.435 21:31:25 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:31.435 21:31:25 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:31.435 21:31:25 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:31.435 21:31:25 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:31.435 21:31:25 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:31.435 21:31:25 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:31.435 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:31.435 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.599 ms 00:30:31.435 00:30:31.435 --- 10.0.0.2 ping statistics --- 00:30:31.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:31.435 rtt min/avg/max/mdev = 0.599/0.599/0.599/0.000 ms 00:30:31.435 21:31:25 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:31.435 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:31.435 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.617 ms 00:30:31.435 00:30:31.435 --- 10.0.0.1 ping statistics --- 00:30:31.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:31.435 rtt min/avg/max/mdev = 0.617/0.617/0.617/0.000 ms 00:30:31.435 21:31:25 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:31.435 21:31:25 -- nvmf/common.sh@411 -- # return 0 00:30:31.435 21:31:25 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:30:31.435 21:31:25 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:31.435 21:31:25 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:30:31.435 21:31:25 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:30:31.435 21:31:25 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:31.435 21:31:25 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:30:31.435 21:31:25 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:30:31.435 21:31:25 -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:30:31.693 21:31:25 -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:30:31.693 21:31:25 -- nvmf/common.sh@717 -- # local ip 00:30:31.693 21:31:25 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:31.693 21:31:25 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:31.693 21:31:25 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:31.693 21:31:25 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:31.693 21:31:25 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:31.693 21:31:25 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:31.693 21:31:25 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:31.693 21:31:25 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:31.693 21:31:25 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:31.693 21:31:25 -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:30:31.694 21:31:25 -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:30:31.694 21:31:25 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:30:31.694 21:31:25 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:30:31.694 21:31:25 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:31.694 21:31:25 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:31.694 21:31:25 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:30:31.694 21:31:25 -- nvmf/common.sh@628 -- # local block nvme 00:30:31.694 21:31:25 -- nvmf/common.sh@630 -- # [[ ! 
-e /sys/module/nvmet ]] 00:30:31.694 21:31:25 -- nvmf/common.sh@631 -- # modprobe nvmet 00:30:31.694 21:31:25 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:30:31.694 21:31:25 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:30:34.229 Waiting for block devices as requested 00:30:34.229 0000:c9:00.0 (144d a80a): vfio-pci -> nvme 00:30:34.229 0000:74:02.0 (8086 0cfe): vfio-pci -> idxd 00:30:34.487 0000:f1:02.0 (8086 0cfe): vfio-pci -> idxd 00:30:34.487 0000:79:02.0 (8086 0cfe): vfio-pci -> idxd 00:30:34.487 0000:6f:01.0 (8086 0b25): vfio-pci -> idxd 00:30:34.487 0000:6f:02.0 (8086 0cfe): vfio-pci -> idxd 00:30:34.746 0000:f6:01.0 (8086 0b25): vfio-pci -> idxd 00:30:34.746 0000:f6:02.0 (8086 0cfe): vfio-pci -> idxd 00:30:34.746 0000:74:01.0 (8086 0b25): vfio-pci -> idxd 00:30:34.746 0000:6a:02.0 (8086 0cfe): vfio-pci -> idxd 00:30:34.746 0000:79:01.0 (8086 0b25): vfio-pci -> idxd 00:30:35.004 0000:ec:01.0 (8086 0b25): vfio-pci -> idxd 00:30:35.004 0000:6a:01.0 (8086 0b25): vfio-pci -> idxd 00:30:35.004 0000:ec:02.0 (8086 0cfe): vfio-pci -> idxd 00:30:35.004 0000:e7:01.0 (8086 0b25): vfio-pci -> idxd 00:30:35.267 0000:e7:02.0 (8086 0cfe): vfio-pci -> idxd 00:30:35.267 0000:f1:01.0 (8086 0b25): vfio-pci -> idxd 00:30:35.267 0000:03:00.0 (1344 51c3): vfio-pci -> nvme 00:30:35.609 21:31:29 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:30:35.609 21:31:29 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:30:35.609 21:31:29 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:30:35.609 21:31:29 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:30:35.609 21:31:29 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:30:35.609 21:31:29 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:30:35.609 21:31:29 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:30:35.609 21:31:29 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:30:35.609 21:31:29 -- scripts/common.sh@387 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:30:35.609 No valid GPT data, bailing 00:30:35.609 21:31:29 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:30:35.609 21:31:29 -- scripts/common.sh@391 -- # pt= 00:30:35.609 21:31:29 -- scripts/common.sh@392 -- # return 1 00:30:35.609 21:31:29 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:30:35.609 21:31:29 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:30:35.609 21:31:29 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme1n1 ]] 00:30:35.609 21:31:29 -- nvmf/common.sh@641 -- # is_block_zoned nvme1n1 00:30:35.609 21:31:29 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:30:35.609 21:31:29 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:30:35.609 21:31:29 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:30:35.609 21:31:29 -- nvmf/common.sh@642 -- # block_in_use nvme1n1 00:30:35.609 21:31:29 -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:30:35.609 21:31:29 -- scripts/common.sh@387 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py nvme1n1 00:30:35.609 No valid GPT data, bailing 00:30:35.609 21:31:29 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:30:35.609 21:31:29 -- scripts/common.sh@391 -- # pt= 00:30:35.609 21:31:29 -- scripts/common.sh@392 -- # return 1 00:30:35.609 21:31:29 -- nvmf/common.sh@642 -- # nvme=/dev/nvme1n1 00:30:35.609 21:31:29 -- nvmf/common.sh@645 -- # [[ -b 
/dev/nvme1n1 ]] 00:30:35.609 21:31:29 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:35.609 21:31:29 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:35.609 21:31:29 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:30:35.609 21:31:29 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:30:35.609 21:31:29 -- nvmf/common.sh@656 -- # echo 1 00:30:35.609 21:31:29 -- nvmf/common.sh@657 -- # echo /dev/nvme1n1 00:30:35.609 21:31:29 -- nvmf/common.sh@658 -- # echo 1 00:30:35.609 21:31:29 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:30:35.609 21:31:29 -- nvmf/common.sh@661 -- # echo tcp 00:30:35.609 21:31:29 -- nvmf/common.sh@662 -- # echo 4420 00:30:35.609 21:31:29 -- nvmf/common.sh@663 -- # echo ipv4 00:30:35.609 21:31:29 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:30:35.609 21:31:29 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -a 10.0.0.1 -t tcp -s 4420 00:30:35.609 00:30:35.609 Discovery Log Number of Records 2, Generation counter 2 00:30:35.609 =====Discovery Log Entry 0====== 00:30:35.609 trtype: tcp 00:30:35.609 adrfam: ipv4 00:30:35.609 subtype: current discovery subsystem 00:30:35.609 treq: not specified, sq flow control disable supported 00:30:35.609 portid: 1 00:30:35.609 trsvcid: 4420 00:30:35.609 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:30:35.609 traddr: 10.0.0.1 00:30:35.609 eflags: none 00:30:35.609 sectype: none 00:30:35.609 =====Discovery Log Entry 1====== 00:30:35.609 trtype: tcp 00:30:35.609 adrfam: ipv4 00:30:35.609 subtype: nvme subsystem 00:30:35.609 treq: not specified, sq flow control disable supported 00:30:35.609 portid: 1 00:30:35.609 trsvcid: 4420 00:30:35.609 subnqn: nqn.2016-06.io.spdk:testnqn 00:30:35.609 traddr: 10.0.0.1 00:30:35.609 eflags: none 00:30:35.609 sectype: none 00:30:35.609 21:31:29 -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:30:35.609 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:30:35.920 EAL: No free 2048 kB hugepages reported on node 1 00:30:35.920 ===================================================== 00:30:35.920 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:30:35.920 ===================================================== 00:30:35.920 Controller Capabilities/Features 00:30:35.920 ================================ 00:30:35.920 Vendor ID: 0000 00:30:35.920 Subsystem Vendor ID: 0000 00:30:35.920 Serial Number: 8059db13d4756a093cb3 00:30:35.920 Model Number: Linux 00:30:35.920 Firmware Version: 6.7.0-68 00:30:35.920 Recommended Arb Burst: 0 00:30:35.920 IEEE OUI Identifier: 00 00 00 00:30:35.920 Multi-path I/O 00:30:35.920 May have multiple subsystem ports: No 00:30:35.920 May have multiple controllers: No 00:30:35.920 Associated with SR-IOV VF: No 00:30:35.920 Max Data Transfer Size: Unlimited 00:30:35.920 Max Number of Namespaces: 0 00:30:35.920 Max Number of I/O Queues: 1024 00:30:35.920 NVMe Specification Version (VS): 1.3 00:30:35.920 NVMe Specification Version (Identify): 1.3 00:30:35.920 Maximum Queue Entries: 1024 00:30:35.920 Contiguous Queues Required: No 00:30:35.920 Arbitration Mechanisms 
Supported 00:30:35.920 Weighted Round Robin: Not Supported 00:30:35.920 Vendor Specific: Not Supported 00:30:35.920 Reset Timeout: 7500 ms 00:30:35.920 Doorbell Stride: 4 bytes 00:30:35.920 NVM Subsystem Reset: Not Supported 00:30:35.920 Command Sets Supported 00:30:35.920 NVM Command Set: Supported 00:30:35.920 Boot Partition: Not Supported 00:30:35.920 Memory Page Size Minimum: 4096 bytes 00:30:35.920 Memory Page Size Maximum: 4096 bytes 00:30:35.920 Persistent Memory Region: Not Supported 00:30:35.920 Optional Asynchronous Events Supported 00:30:35.920 Namespace Attribute Notices: Not Supported 00:30:35.920 Firmware Activation Notices: Not Supported 00:30:35.920 ANA Change Notices: Not Supported 00:30:35.920 PLE Aggregate Log Change Notices: Not Supported 00:30:35.920 LBA Status Info Alert Notices: Not Supported 00:30:35.920 EGE Aggregate Log Change Notices: Not Supported 00:30:35.920 Normal NVM Subsystem Shutdown event: Not Supported 00:30:35.920 Zone Descriptor Change Notices: Not Supported 00:30:35.920 Discovery Log Change Notices: Supported 00:30:35.920 Controller Attributes 00:30:35.920 128-bit Host Identifier: Not Supported 00:30:35.920 Non-Operational Permissive Mode: Not Supported 00:30:35.920 NVM Sets: Not Supported 00:30:35.920 Read Recovery Levels: Not Supported 00:30:35.920 Endurance Groups: Not Supported 00:30:35.920 Predictable Latency Mode: Not Supported 00:30:35.920 Traffic Based Keep ALive: Not Supported 00:30:35.920 Namespace Granularity: Not Supported 00:30:35.920 SQ Associations: Not Supported 00:30:35.920 UUID List: Not Supported 00:30:35.920 Multi-Domain Subsystem: Not Supported 00:30:35.920 Fixed Capacity Management: Not Supported 00:30:35.920 Variable Capacity Management: Not Supported 00:30:35.920 Delete Endurance Group: Not Supported 00:30:35.920 Delete NVM Set: Not Supported 00:30:35.920 Extended LBA Formats Supported: Not Supported 00:30:35.920 Flexible Data Placement Supported: Not Supported 00:30:35.920 00:30:35.920 Controller Memory Buffer Support 00:30:35.920 ================================ 00:30:35.920 Supported: No 00:30:35.920 00:30:35.920 Persistent Memory Region Support 00:30:35.920 ================================ 00:30:35.920 Supported: No 00:30:35.920 00:30:35.920 Admin Command Set Attributes 00:30:35.920 ============================ 00:30:35.920 Security Send/Receive: Not Supported 00:30:35.920 Format NVM: Not Supported 00:30:35.920 Firmware Activate/Download: Not Supported 00:30:35.920 Namespace Management: Not Supported 00:30:35.920 Device Self-Test: Not Supported 00:30:35.920 Directives: Not Supported 00:30:35.920 NVMe-MI: Not Supported 00:30:35.920 Virtualization Management: Not Supported 00:30:35.920 Doorbell Buffer Config: Not Supported 00:30:35.920 Get LBA Status Capability: Not Supported 00:30:35.920 Command & Feature Lockdown Capability: Not Supported 00:30:35.920 Abort Command Limit: 1 00:30:35.920 Async Event Request Limit: 1 00:30:35.920 Number of Firmware Slots: N/A 00:30:35.920 Firmware Slot 1 Read-Only: N/A 00:30:35.920 Firmware Activation Without Reset: N/A 00:30:35.920 Multiple Update Detection Support: N/A 00:30:35.920 Firmware Update Granularity: No Information Provided 00:30:35.920 Per-Namespace SMART Log: No 00:30:35.920 Asymmetric Namespace Access Log Page: Not Supported 00:30:35.920 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:30:35.920 Command Effects Log Page: Not Supported 00:30:35.920 Get Log Page Extended Data: Supported 00:30:35.920 Telemetry Log Pages: Not Supported 00:30:35.920 Persistent Event Log Pages: 
Not Supported 00:30:35.920 Supported Log Pages Log Page: May Support 00:30:35.920 Commands Supported & Effects Log Page: Not Supported 00:30:35.920 Feature Identifiers & Effects Log Page:May Support 00:30:35.920 NVMe-MI Commands & Effects Log Page: May Support 00:30:35.920 Data Area 4 for Telemetry Log: Not Supported 00:30:35.920 Error Log Page Entries Supported: 1 00:30:35.920 Keep Alive: Not Supported 00:30:35.920 00:30:35.920 NVM Command Set Attributes 00:30:35.920 ========================== 00:30:35.920 Submission Queue Entry Size 00:30:35.920 Max: 1 00:30:35.920 Min: 1 00:30:35.920 Completion Queue Entry Size 00:30:35.920 Max: 1 00:30:35.920 Min: 1 00:30:35.920 Number of Namespaces: 0 00:30:35.920 Compare Command: Not Supported 00:30:35.920 Write Uncorrectable Command: Not Supported 00:30:35.920 Dataset Management Command: Not Supported 00:30:35.920 Write Zeroes Command: Not Supported 00:30:35.920 Set Features Save Field: Not Supported 00:30:35.920 Reservations: Not Supported 00:30:35.920 Timestamp: Not Supported 00:30:35.920 Copy: Not Supported 00:30:35.920 Volatile Write Cache: Not Present 00:30:35.920 Atomic Write Unit (Normal): 1 00:30:35.920 Atomic Write Unit (PFail): 1 00:30:35.920 Atomic Compare & Write Unit: 1 00:30:35.920 Fused Compare & Write: Not Supported 00:30:35.920 Scatter-Gather List 00:30:35.920 SGL Command Set: Supported 00:30:35.920 SGL Keyed: Not Supported 00:30:35.920 SGL Bit Bucket Descriptor: Not Supported 00:30:35.920 SGL Metadata Pointer: Not Supported 00:30:35.920 Oversized SGL: Not Supported 00:30:35.920 SGL Metadata Address: Not Supported 00:30:35.920 SGL Offset: Supported 00:30:35.920 Transport SGL Data Block: Not Supported 00:30:35.920 Replay Protected Memory Block: Not Supported 00:30:35.920 00:30:35.920 Firmware Slot Information 00:30:35.920 ========================= 00:30:35.920 Active slot: 0 00:30:35.920 00:30:35.920 00:30:35.920 Error Log 00:30:35.920 ========= 00:30:35.920 00:30:35.920 Active Namespaces 00:30:35.920 ================= 00:30:35.920 Discovery Log Page 00:30:35.920 ================== 00:30:35.920 Generation Counter: 2 00:30:35.920 Number of Records: 2 00:30:35.920 Record Format: 0 00:30:35.920 00:30:35.920 Discovery Log Entry 0 00:30:35.920 ---------------------- 00:30:35.920 Transport Type: 3 (TCP) 00:30:35.920 Address Family: 1 (IPv4) 00:30:35.920 Subsystem Type: 3 (Current Discovery Subsystem) 00:30:35.920 Entry Flags: 00:30:35.920 Duplicate Returned Information: 0 00:30:35.921 Explicit Persistent Connection Support for Discovery: 0 00:30:35.921 Transport Requirements: 00:30:35.921 Secure Channel: Not Specified 00:30:35.921 Port ID: 1 (0x0001) 00:30:35.921 Controller ID: 65535 (0xffff) 00:30:35.921 Admin Max SQ Size: 32 00:30:35.921 Transport Service Identifier: 4420 00:30:35.921 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:30:35.921 Transport Address: 10.0.0.1 00:30:35.921 Discovery Log Entry 1 00:30:35.921 ---------------------- 00:30:35.921 Transport Type: 3 (TCP) 00:30:35.921 Address Family: 1 (IPv4) 00:30:35.921 Subsystem Type: 2 (NVM Subsystem) 00:30:35.921 Entry Flags: 00:30:35.921 Duplicate Returned Information: 0 00:30:35.921 Explicit Persistent Connection Support for Discovery: 0 00:30:35.921 Transport Requirements: 00:30:35.921 Secure Channel: Not Specified 00:30:35.921 Port ID: 1 (0x0001) 00:30:35.921 Controller ID: 65535 (0xffff) 00:30:35.921 Admin Max SQ Size: 32 00:30:35.921 Transport Service Identifier: 4420 00:30:35.921 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:30:35.921 
Transport Address: 10.0.0.1 00:30:35.921 21:31:29 -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:35.921 EAL: No free 2048 kB hugepages reported on node 1 00:30:35.921 get_feature(0x01) failed 00:30:35.921 get_feature(0x02) failed 00:30:35.921 get_feature(0x04) failed 00:30:35.921 ===================================================== 00:30:35.921 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:30:35.921 ===================================================== 00:30:35.921 Controller Capabilities/Features 00:30:35.921 ================================ 00:30:35.921 Vendor ID: 0000 00:30:35.921 Subsystem Vendor ID: 0000 00:30:35.921 Serial Number: faefa5b0eff645c0cff5 00:30:35.921 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:30:35.921 Firmware Version: 6.7.0-68 00:30:35.921 Recommended Arb Burst: 6 00:30:35.921 IEEE OUI Identifier: 00 00 00 00:30:35.921 Multi-path I/O 00:30:35.921 May have multiple subsystem ports: Yes 00:30:35.921 May have multiple controllers: Yes 00:30:35.921 Associated with SR-IOV VF: No 00:30:35.921 Max Data Transfer Size: Unlimited 00:30:35.921 Max Number of Namespaces: 1024 00:30:35.921 Max Number of I/O Queues: 128 00:30:35.921 NVMe Specification Version (VS): 1.3 00:30:35.921 NVMe Specification Version (Identify): 1.3 00:30:35.921 Maximum Queue Entries: 1024 00:30:35.921 Contiguous Queues Required: No 00:30:35.921 Arbitration Mechanisms Supported 00:30:35.921 Weighted Round Robin: Not Supported 00:30:35.921 Vendor Specific: Not Supported 00:30:35.921 Reset Timeout: 7500 ms 00:30:35.921 Doorbell Stride: 4 bytes 00:30:35.921 NVM Subsystem Reset: Not Supported 00:30:35.921 Command Sets Supported 00:30:35.921 NVM Command Set: Supported 00:30:35.921 Boot Partition: Not Supported 00:30:35.921 Memory Page Size Minimum: 4096 bytes 00:30:35.921 Memory Page Size Maximum: 4096 bytes 00:30:35.921 Persistent Memory Region: Not Supported 00:30:35.921 Optional Asynchronous Events Supported 00:30:35.921 Namespace Attribute Notices: Supported 00:30:35.921 Firmware Activation Notices: Not Supported 00:30:35.921 ANA Change Notices: Supported 00:30:35.921 PLE Aggregate Log Change Notices: Not Supported 00:30:35.921 LBA Status Info Alert Notices: Not Supported 00:30:35.921 EGE Aggregate Log Change Notices: Not Supported 00:30:35.921 Normal NVM Subsystem Shutdown event: Not Supported 00:30:35.921 Zone Descriptor Change Notices: Not Supported 00:30:35.921 Discovery Log Change Notices: Not Supported 00:30:35.921 Controller Attributes 00:30:35.921 128-bit Host Identifier: Supported 00:30:35.921 Non-Operational Permissive Mode: Not Supported 00:30:35.921 NVM Sets: Not Supported 00:30:35.921 Read Recovery Levels: Not Supported 00:30:35.921 Endurance Groups: Not Supported 00:30:35.921 Predictable Latency Mode: Not Supported 00:30:35.921 Traffic Based Keep ALive: Supported 00:30:35.921 Namespace Granularity: Not Supported 00:30:35.921 SQ Associations: Not Supported 00:30:35.921 UUID List: Not Supported 00:30:35.921 Multi-Domain Subsystem: Not Supported 00:30:35.921 Fixed Capacity Management: Not Supported 00:30:35.921 Variable Capacity Management: Not Supported 00:30:35.921 Delete Endurance Group: Not Supported 00:30:35.921 Delete NVM Set: Not Supported 00:30:35.921 Extended LBA Formats Supported: Not Supported 00:30:35.921 Flexible Data Placement Supported: Not Supported 00:30:35.921 00:30:35.921 
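The get_feature(0x01)/(0x02)/(0x04) failures at the head of this report, and the get_feature(0x05) failure further down under Active Namespaces, are expected rather than a defect: spdk_nvme_identify probes several optional features, and the Linux kernel nvmet target implements only a small subset, rejecting the rest as invalid. The probed IDs per the NVMe base specification are 0x01 Arbitration, 0x02 Power Management, 0x04 Temperature Threshold, and 0x05 Error Recovery. The same rejection can be reproduced by hand with nvme-cli against the kernel target (the /dev/nvme1 controller name here is hypothetical; substitute the device created by your own nvme connect):

    nvme get-feature /dev/nvme1 -f 0x01   # Arbitration -> expect an Invalid Field-type error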
Controller Memory Buffer Support 00:30:35.921 ================================ 00:30:35.921 Supported: No 00:30:35.921 00:30:35.921 Persistent Memory Region Support 00:30:35.921 ================================ 00:30:35.921 Supported: No 00:30:35.921 00:30:35.921 Admin Command Set Attributes 00:30:35.921 ============================ 00:30:35.921 Security Send/Receive: Not Supported 00:30:35.921 Format NVM: Not Supported 00:30:35.921 Firmware Activate/Download: Not Supported 00:30:35.921 Namespace Management: Not Supported 00:30:35.921 Device Self-Test: Not Supported 00:30:35.921 Directives: Not Supported 00:30:35.921 NVMe-MI: Not Supported 00:30:35.921 Virtualization Management: Not Supported 00:30:35.921 Doorbell Buffer Config: Not Supported 00:30:35.921 Get LBA Status Capability: Not Supported 00:30:35.921 Command & Feature Lockdown Capability: Not Supported 00:30:35.921 Abort Command Limit: 4 00:30:35.921 Async Event Request Limit: 4 00:30:35.921 Number of Firmware Slots: N/A 00:30:35.921 Firmware Slot 1 Read-Only: N/A 00:30:35.921 Firmware Activation Without Reset: N/A 00:30:35.921 Multiple Update Detection Support: N/A 00:30:35.921 Firmware Update Granularity: No Information Provided 00:30:35.921 Per-Namespace SMART Log: Yes 00:30:35.921 Asymmetric Namespace Access Log Page: Supported 00:30:35.921 ANA Transition Time : 10 sec 00:30:35.921 00:30:35.921 Asymmetric Namespace Access Capabilities 00:30:35.921 ANA Optimized State : Supported 00:30:35.921 ANA Non-Optimized State : Supported 00:30:35.921 ANA Inaccessible State : Supported 00:30:35.921 ANA Persistent Loss State : Supported 00:30:35.921 ANA Change State : Supported 00:30:35.921 ANAGRPID is not changed : No 00:30:35.921 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:30:35.921 00:30:35.921 ANA Group Identifier Maximum : 128 00:30:35.921 Number of ANA Group Identifiers : 128 00:30:35.921 Max Number of Allowed Namespaces : 1024 00:30:35.921 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:30:35.921 Command Effects Log Page: Supported 00:30:35.921 Get Log Page Extended Data: Supported 00:30:35.921 Telemetry Log Pages: Not Supported 00:30:35.921 Persistent Event Log Pages: Not Supported 00:30:35.921 Supported Log Pages Log Page: May Support 00:30:35.921 Commands Supported & Effects Log Page: Not Supported 00:30:35.921 Feature Identifiers & Effects Log Page:May Support 00:30:35.921 NVMe-MI Commands & Effects Log Page: May Support 00:30:35.921 Data Area 4 for Telemetry Log: Not Supported 00:30:35.921 Error Log Page Entries Supported: 128 00:30:35.921 Keep Alive: Supported 00:30:35.921 Keep Alive Granularity: 1000 ms 00:30:35.921 00:30:35.921 NVM Command Set Attributes 00:30:35.921 ========================== 00:30:35.921 Submission Queue Entry Size 00:30:35.921 Max: 64 00:30:35.921 Min: 64 00:30:35.921 Completion Queue Entry Size 00:30:35.921 Max: 16 00:30:35.921 Min: 16 00:30:35.921 Number of Namespaces: 1024 00:30:35.921 Compare Command: Not Supported 00:30:35.921 Write Uncorrectable Command: Not Supported 00:30:35.921 Dataset Management Command: Supported 00:30:35.921 Write Zeroes Command: Supported 00:30:35.921 Set Features Save Field: Not Supported 00:30:35.921 Reservations: Not Supported 00:30:35.921 Timestamp: Not Supported 00:30:35.921 Copy: Not Supported 00:30:35.921 Volatile Write Cache: Present 00:30:35.921 Atomic Write Unit (Normal): 1 00:30:35.921 Atomic Write Unit (PFail): 1 00:30:35.921 Atomic Compare & Write Unit: 1 00:30:35.921 Fused Compare & Write: Not Supported 00:30:35.921 Scatter-Gather List 00:30:35.921 SGL 
Command Set: Supported 00:30:35.921 SGL Keyed: Not Supported 00:30:35.921 SGL Bit Bucket Descriptor: Not Supported 00:30:35.921 SGL Metadata Pointer: Not Supported 00:30:35.921 Oversized SGL: Not Supported 00:30:35.921 SGL Metadata Address: Not Supported 00:30:35.921 SGL Offset: Supported 00:30:35.921 Transport SGL Data Block: Not Supported 00:30:35.921 Replay Protected Memory Block: Not Supported 00:30:35.921 00:30:35.921 Firmware Slot Information 00:30:35.921 ========================= 00:30:35.921 Active slot: 0 00:30:35.921 00:30:35.921 Asymmetric Namespace Access 00:30:35.921 =========================== 00:30:35.921 Change Count : 0 00:30:35.921 Number of ANA Group Descriptors : 1 00:30:35.921 ANA Group Descriptor : 0 00:30:35.921 ANA Group ID : 1 00:30:35.921 Number of NSID Values : 1 00:30:35.921 Change Count : 0 00:30:35.921 ANA State : 1 00:30:35.922 Namespace Identifier : 1 00:30:35.922 00:30:35.922 Commands Supported and Effects 00:30:35.922 ============================== 00:30:35.922 Admin Commands 00:30:35.922 -------------- 00:30:35.922 Get Log Page (02h): Supported 00:30:35.922 Identify (06h): Supported 00:30:35.922 Abort (08h): Supported 00:30:35.922 Set Features (09h): Supported 00:30:35.922 Get Features (0Ah): Supported 00:30:35.922 Asynchronous Event Request (0Ch): Supported 00:30:35.922 Keep Alive (18h): Supported 00:30:35.922 I/O Commands 00:30:35.922 ------------ 00:30:35.922 Flush (00h): Supported 00:30:35.922 Write (01h): Supported LBA-Change 00:30:35.922 Read (02h): Supported 00:30:35.922 Write Zeroes (08h): Supported LBA-Change 00:30:35.922 Dataset Management (09h): Supported 00:30:35.922 00:30:35.922 Error Log 00:30:35.922 ========= 00:30:35.922 Entry: 0 00:30:35.922 Error Count: 0x3 00:30:35.922 Submission Queue Id: 0x0 00:30:35.922 Command Id: 0x5 00:30:35.922 Phase Bit: 0 00:30:35.922 Status Code: 0x2 00:30:35.922 Status Code Type: 0x0 00:30:35.922 Do Not Retry: 1 00:30:35.922 Error Location: 0x28 00:30:35.922 LBA: 0x0 00:30:35.922 Namespace: 0x0 00:30:35.922 Vendor Log Page: 0x0 00:30:35.922 ----------- 00:30:35.922 Entry: 1 00:30:35.922 Error Count: 0x2 00:30:35.922 Submission Queue Id: 0x0 00:30:35.922 Command Id: 0x5 00:30:35.922 Phase Bit: 0 00:30:35.922 Status Code: 0x2 00:30:35.922 Status Code Type: 0x0 00:30:35.922 Do Not Retry: 1 00:30:35.922 Error Location: 0x28 00:30:35.922 LBA: 0x0 00:30:35.922 Namespace: 0x0 00:30:35.922 Vendor Log Page: 0x0 00:30:35.922 ----------- 00:30:35.922 Entry: 2 00:30:35.922 Error Count: 0x1 00:30:35.922 Submission Queue Id: 0x0 00:30:35.922 Command Id: 0x4 00:30:35.922 Phase Bit: 0 00:30:35.922 Status Code: 0x2 00:30:35.922 Status Code Type: 0x0 00:30:35.922 Do Not Retry: 1 00:30:35.922 Error Location: 0x28 00:30:35.922 LBA: 0x0 00:30:35.922 Namespace: 0x0 00:30:35.922 Vendor Log Page: 0x0 00:30:35.922 00:30:35.922 Number of Queues 00:30:35.922 ================ 00:30:35.922 Number of I/O Submission Queues: 128 00:30:35.922 Number of I/O Completion Queues: 128 00:30:35.922 00:30:35.922 ZNS Specific Controller Data 00:30:35.922 ============================ 00:30:35.922 Zone Append Size Limit: 0 00:30:35.922 00:30:35.922 00:30:35.922 Active Namespaces 00:30:35.922 ================= 00:30:35.922 get_feature(0x05) failed 00:30:35.922 Namespace ID:1 00:30:35.922 Command Set Identifier: NVM (00h) 00:30:35.922 Deallocate: Supported 00:30:35.922 Deallocated/Unwritten Error: Not Supported 00:30:35.922 Deallocated Read Value: Unknown 00:30:35.922 Deallocate in Write Zeroes: Not Supported 00:30:35.922 Deallocated Guard Field: 0xFFFF 
00:30:35.922 Flush: Supported 00:30:35.922 Reservation: Not Supported 00:30:35.922 Namespace Sharing Capabilities: Multiple Controllers 00:30:35.922 Size (in LBAs): 1875385008 (894GiB) 00:30:35.922 Capacity (in LBAs): 1875385008 (894GiB) 00:30:35.922 Utilization (in LBAs): 1875385008 (894GiB) 00:30:35.922 UUID: 5cf211ba-a55e-4a99-9e77-e58f1671588b 00:30:35.922 Thin Provisioning: Not Supported 00:30:35.922 Per-NS Atomic Units: Yes 00:30:35.922 Atomic Write Unit (Normal): 8 00:30:35.922 Atomic Write Unit (PFail): 8 00:30:35.922 Preferred Write Granularity: 8 00:30:35.922 Atomic Compare & Write Unit: 8 00:30:35.922 Atomic Boundary Size (Normal): 0 00:30:35.922 Atomic Boundary Size (PFail): 0 00:30:35.922 Atomic Boundary Offset: 0 00:30:35.922 NGUID/EUI64 Never Reused: No 00:30:35.922 ANA group ID: 1 00:30:35.922 Namespace Write Protected: No 00:30:35.922 Number of LBA Formats: 1 00:30:35.922 Current LBA Format: LBA Format #00 00:30:35.922 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:35.922 00:30:35.922 21:31:30 -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:30:35.922 21:31:30 -- nvmf/common.sh@477 -- # nvmfcleanup 00:30:35.922 21:31:30 -- nvmf/common.sh@117 -- # sync 00:30:35.922 21:31:30 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:35.922 21:31:30 -- nvmf/common.sh@120 -- # set +e 00:30:35.922 21:31:30 -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:35.922 21:31:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:35.922 rmmod nvme_tcp 00:30:35.922 rmmod nvme_fabrics 00:30:35.922 21:31:30 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:35.922 21:31:30 -- nvmf/common.sh@124 -- # set -e 00:30:35.922 21:31:30 -- nvmf/common.sh@125 -- # return 0 00:30:35.922 21:31:30 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:30:35.922 21:31:30 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:30:35.922 21:31:30 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:30:35.922 21:31:30 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:30:35.922 21:31:30 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:35.922 21:31:30 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:35.922 21:31:30 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:35.922 21:31:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:35.922 21:31:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:38.467 21:31:32 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:38.467 21:31:32 -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:30:38.467 21:31:32 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:30:38.467 21:31:32 -- nvmf/common.sh@675 -- # echo 0 00:30:38.467 21:31:32 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:38.467 21:31:32 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:38.467 21:31:32 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:30:38.467 21:31:32 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:38.467 21:31:32 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:30:38.467 21:31:32 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:30:38.467 21:31:32 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:30:40.998 0000:74:02.0 (8086 0cfe): idxd -> vfio-pci 00:30:40.998 0000:f1:02.0 
(8086 0cfe): idxd -> vfio-pci 00:30:40.998 0000:79:02.0 (8086 0cfe): idxd -> vfio-pci 00:30:40.998 0000:6f:01.0 (8086 0b25): idxd -> vfio-pci 00:30:40.998 0000:6f:02.0 (8086 0cfe): idxd -> vfio-pci 00:30:40.998 0000:f6:01.0 (8086 0b25): idxd -> vfio-pci 00:30:40.998 0000:f6:02.0 (8086 0cfe): idxd -> vfio-pci 00:30:40.998 0000:74:01.0 (8086 0b25): idxd -> vfio-pci 00:30:40.998 0000:6a:02.0 (8086 0cfe): idxd -> vfio-pci 00:30:40.998 0000:79:01.0 (8086 0b25): idxd -> vfio-pci 00:30:40.998 0000:ec:01.0 (8086 0b25): idxd -> vfio-pci 00:30:40.998 0000:6a:01.0 (8086 0b25): idxd -> vfio-pci 00:30:40.998 0000:ec:02.0 (8086 0cfe): idxd -> vfio-pci 00:30:40.998 0000:e7:01.0 (8086 0b25): idxd -> vfio-pci 00:30:40.998 0000:e7:02.0 (8086 0cfe): idxd -> vfio-pci 00:30:40.998 0000:f1:01.0 (8086 0b25): idxd -> vfio-pci 00:30:41.564 0000:c9:00.0 (144d a80a): nvme -> vfio-pci 00:30:41.823 0000:03:00.0 (1344 51c3): nvme -> vfio-pci 00:30:41.823 00:30:41.823 real 0m15.921s 00:30:41.823 user 0m3.626s 00:30:41.823 sys 0m7.801s 00:30:41.823 21:31:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:41.823 21:31:36 -- common/autotest_common.sh@10 -- # set +x 00:30:41.823 ************************************ 00:30:41.823 END TEST nvmf_identify_kernel_target 00:30:41.823 ************************************ 00:30:41.823 21:31:36 -- nvmf/nvmf.sh@102 -- # run_test nvmf_auth /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:30:41.823 21:31:36 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:30:41.823 21:31:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:41.823 21:31:36 -- common/autotest_common.sh@10 -- # set +x 00:30:42.081 ************************************ 00:30:42.081 START TEST nvmf_auth 00:30:42.081 ************************************ 00:30:42.081 21:31:36 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:30:42.081 * Looking for test storage... 
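Before nvmf_auth gets going, note how the identify test above unwound its kernel target: clean_kernel_target is the configfs setup run in reverse. Condensed from that trace (the bare echo 0 there is, by all appearances, the namespace enable flag being cleared first):

echo 0 > /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
rmdir /sys/kernel/config/nvmet/ports/1
rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
modprobe -r nvmet_tcp nvmet    # transport module first, core module last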
00:30:42.081 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:30:42.081 21:31:36 -- host/auth.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:30:42.081 21:31:36 -- nvmf/common.sh@7 -- # uname -s 00:30:42.081 21:31:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:42.081 21:31:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:42.081 21:31:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:42.081 21:31:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:42.081 21:31:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:42.082 21:31:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:42.082 21:31:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:42.082 21:31:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:42.082 21:31:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:42.082 21:31:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:42.082 21:31:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:30:42.082 21:31:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:30:42.082 21:31:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:42.082 21:31:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:42.082 21:31:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:30:42.082 21:31:36 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:42.082 21:31:36 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:30:42.082 21:31:36 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:42.082 21:31:36 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:42.082 21:31:36 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:42.082 21:31:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.082 21:31:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.082 21:31:36 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.082 21:31:36 -- paths/export.sh@5 -- # export PATH 00:30:42.082 21:31:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.082 21:31:36 -- nvmf/common.sh@47 -- # : 0 00:30:42.082 21:31:36 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:42.082 21:31:36 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:42.082 21:31:36 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:42.082 21:31:36 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:42.082 21:31:36 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:42.082 21:31:36 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:42.082 21:31:36 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:42.082 21:31:36 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:42.082 21:31:36 -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:30:42.082 21:31:36 -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:30:42.082 21:31:36 -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:30:42.082 21:31:36 -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:30:42.082 21:31:36 -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:30:42.082 21:31:36 -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:30:42.082 21:31:36 -- host/auth.sh@21 -- # keys=() 00:30:42.082 21:31:36 -- host/auth.sh@77 -- # nvmftestinit 00:30:42.082 21:31:36 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:30:42.082 21:31:36 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:42.082 21:31:36 -- nvmf/common.sh@437 -- # prepare_net_devs 00:30:42.082 21:31:36 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:30:42.082 21:31:36 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:30:42.082 21:31:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:42.082 21:31:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:42.082 21:31:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:42.082 21:31:36 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:30:42.082 21:31:36 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:30:42.082 21:31:36 -- nvmf/common.sh@285 -- # xtrace_disable 00:30:42.082 21:31:36 -- common/autotest_common.sh@10 -- # set +x 00:30:47.360 21:31:41 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:30:47.360 21:31:41 -- nvmf/common.sh@291 -- # 
pci_devs=() 00:30:47.360 21:31:41 -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:47.360 21:31:41 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:47.360 21:31:41 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:47.360 21:31:41 -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:47.360 21:31:41 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:47.360 21:31:41 -- nvmf/common.sh@295 -- # net_devs=() 00:30:47.360 21:31:41 -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:47.360 21:31:41 -- nvmf/common.sh@296 -- # e810=() 00:30:47.360 21:31:41 -- nvmf/common.sh@296 -- # local -ga e810 00:30:47.360 21:31:41 -- nvmf/common.sh@297 -- # x722=() 00:30:47.360 21:31:41 -- nvmf/common.sh@297 -- # local -ga x722 00:30:47.360 21:31:41 -- nvmf/common.sh@298 -- # mlx=() 00:30:47.360 21:31:41 -- nvmf/common.sh@298 -- # local -ga mlx 00:30:47.360 21:31:41 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:47.360 21:31:41 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:47.360 21:31:41 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:47.360 21:31:41 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:47.360 21:31:41 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:47.360 21:31:41 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:47.360 21:31:41 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:47.360 21:31:41 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:47.360 21:31:41 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:47.360 21:31:41 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:47.360 21:31:41 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:47.360 21:31:41 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:47.360 21:31:41 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:47.360 21:31:41 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:30:47.360 21:31:41 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:30:47.360 21:31:41 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:30:47.360 21:31:41 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:47.360 21:31:41 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:47.360 21:31:41 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:30:47.360 Found 0000:27:00.0 (0x8086 - 0x159b) 00:30:47.360 21:31:41 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:47.360 21:31:41 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:47.360 21:31:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:47.360 21:31:41 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:47.360 21:31:41 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:47.360 21:31:41 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:47.360 21:31:41 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:30:47.360 Found 0000:27:00.1 (0x8086 - 0x159b) 00:30:47.360 21:31:41 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:47.360 21:31:41 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:47.360 21:31:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:47.360 21:31:41 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:47.360 21:31:41 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:47.360 21:31:41 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:47.360 21:31:41 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 
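Both 0x159b functions found above are E810 ports driven by ice. The lines that follow map each PCI function to its kernel netdev purely through sysfs; the same lookup can be done by hand with an address from this run:

pci=0000:27:00.0
ls /sys/bus/pci/devices/$pci/net/      # the bound netdev, cvl_0_0 on this host
cat /sys/bus/pci/devices/$pci/vendor   # 0x8086
cat /sys/bus/pci/devices/$pci/device   # 0x159b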
00:30:47.360 21:31:41 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:47.360 21:31:41 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:47.361 21:31:41 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:30:47.361 21:31:41 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:47.361 21:31:41 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:30:47.361 Found net devices under 0000:27:00.0: cvl_0_0 00:30:47.361 21:31:41 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:30:47.361 21:31:41 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:47.361 21:31:41 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:47.361 21:31:41 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:30:47.361 21:31:41 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:47.361 21:31:41 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:30:47.361 Found net devices under 0000:27:00.1: cvl_0_1 00:30:47.361 21:31:41 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:30:47.361 21:31:41 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:30:47.361 21:31:41 -- nvmf/common.sh@403 -- # is_hw=yes 00:30:47.361 21:31:41 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:30:47.361 21:31:41 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:30:47.361 21:31:41 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:30:47.361 21:31:41 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:47.361 21:31:41 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:47.361 21:31:41 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:47.361 21:31:41 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:47.361 21:31:41 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:47.361 21:31:41 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:47.361 21:31:41 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:47.361 21:31:41 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:47.361 21:31:41 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:47.361 21:31:41 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:47.361 21:31:41 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:47.361 21:31:41 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:47.361 21:31:41 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:47.361 21:31:41 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:47.361 21:31:41 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:47.361 21:31:41 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:47.361 21:31:41 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:47.361 21:31:41 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:47.361 21:31:41 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:47.361 21:31:41 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:47.361 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:47.361 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.290 ms 00:30:47.361 00:30:47.361 --- 10.0.0.2 ping statistics --- 00:30:47.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:47.361 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:30:47.361 21:31:41 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:47.361 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:47.361 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:30:47.361 00:30:47.361 --- 10.0.0.1 ping statistics --- 00:30:47.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:47.361 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:30:47.361 21:31:41 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:47.361 21:31:41 -- nvmf/common.sh@411 -- # return 0 00:30:47.361 21:31:41 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:30:47.361 21:31:41 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:47.361 21:31:41 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:30:47.361 21:31:41 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:30:47.361 21:31:41 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:47.361 21:31:41 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:30:47.361 21:31:41 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:30:47.361 21:31:41 -- host/auth.sh@78 -- # nvmfappstart -L nvme_auth 00:30:47.361 21:31:41 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:30:47.361 21:31:41 -- common/autotest_common.sh@710 -- # xtrace_disable 00:30:47.361 21:31:41 -- common/autotest_common.sh@10 -- # set +x 00:30:47.361 21:31:41 -- nvmf/common.sh@470 -- # nvmfpid=1637164 00:30:47.361 21:31:41 -- nvmf/common.sh@471 -- # waitforlisten 1637164 00:30:47.361 21:31:41 -- common/autotest_common.sh@817 -- # '[' -z 1637164 ']' 00:30:47.361 21:31:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:47.361 21:31:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:47.361 21:31:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
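The two pings close out nvmf_tcp_init: the target port cvl_0_0 now sits inside the cvl_0_0_ns_spdk namespace as 10.0.0.2, while the initiator keeps cvl_0_1 at 10.0.0.1 in the default namespace. Reduced to its essentials, the wiring traced above is:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk    # target side of the link
ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side stays in the default ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # admit NVMe/TCP
ping -c 1 10.0.0.2    # initiator -> target, as in the output above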
00:30:47.361 21:31:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:47.361 21:31:41 -- common/autotest_common.sh@10 -- # set +x 00:30:47.361 21:31:41 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:30:48.300 21:31:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:48.300 21:31:42 -- common/autotest_common.sh@850 -- # return 0 00:30:48.300 21:31:42 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:30:48.300 21:31:42 -- common/autotest_common.sh@716 -- # xtrace_disable 00:30:48.300 21:31:42 -- common/autotest_common.sh@10 -- # set +x 00:30:48.300 21:31:42 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:48.300 21:31:42 -- host/auth.sh@79 -- # trap 'cat /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:30:48.300 21:31:42 -- host/auth.sh@81 -- # gen_key null 32 00:30:48.300 21:31:42 -- host/auth.sh@53 -- # local digest len file key 00:30:48.300 21:31:42 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:48.300 21:31:42 -- host/auth.sh@54 -- # local -A digests 00:30:48.300 21:31:42 -- host/auth.sh@56 -- # digest=null 00:30:48.300 21:31:42 -- host/auth.sh@56 -- # len=32 00:30:48.300 21:31:42 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:30:48.300 21:31:42 -- host/auth.sh@57 -- # key=2c0bd68494c4eef66158a1228790c6ea 00:30:48.300 21:31:42 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:30:48.300 21:31:42 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.fGX 00:30:48.300 21:31:42 -- host/auth.sh@59 -- # format_dhchap_key 2c0bd68494c4eef66158a1228790c6ea 0 00:30:48.300 21:31:42 -- nvmf/common.sh@708 -- # format_key DHHC-1 2c0bd68494c4eef66158a1228790c6ea 0 00:30:48.300 21:31:42 -- nvmf/common.sh@691 -- # local prefix key digest 00:30:48.300 21:31:42 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:30:48.300 21:31:42 -- nvmf/common.sh@693 -- # key=2c0bd68494c4eef66158a1228790c6ea 00:30:48.300 21:31:42 -- nvmf/common.sh@693 -- # digest=0 00:30:48.300 21:31:42 -- nvmf/common.sh@694 -- # python - 00:30:48.300 21:31:42 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.fGX 00:30:48.300 21:31:42 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.fGX 00:30:48.300 21:31:42 -- host/auth.sh@81 -- # keys[0]=/tmp/spdk.key-null.fGX 00:30:48.300 21:31:42 -- host/auth.sh@82 -- # gen_key null 48 00:30:48.300 21:31:42 -- host/auth.sh@53 -- # local digest len file key 00:30:48.300 21:31:42 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:48.300 21:31:42 -- host/auth.sh@54 -- # local -A digests 00:30:48.300 21:31:42 -- host/auth.sh@56 -- # digest=null 00:30:48.301 21:31:42 -- host/auth.sh@56 -- # len=48 00:30:48.301 21:31:42 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:30:48.301 21:31:42 -- host/auth.sh@57 -- # key=20391cd92e4dc1c1c289282ef8de4cd8dc24396bea53f988 00:30:48.301 21:31:42 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:30:48.301 21:31:42 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.VwQ 00:30:48.301 21:31:42 -- host/auth.sh@59 -- # format_dhchap_key 20391cd92e4dc1c1c289282ef8de4cd8dc24396bea53f988 0 00:30:48.301 21:31:42 -- nvmf/common.sh@708 -- # format_key DHHC-1 20391cd92e4dc1c1c289282ef8de4cd8dc24396bea53f988 0 00:30:48.301 21:31:42 -- nvmf/common.sh@691 -- # local prefix key digest 00:30:48.301 21:31:42 -- nvmf/common.sh@693 -- # 
prefix=DHHC-1 00:30:48.301 21:31:42 -- nvmf/common.sh@693 -- # key=20391cd92e4dc1c1c289282ef8de4cd8dc24396bea53f988 00:30:48.301 21:31:42 -- nvmf/common.sh@693 -- # digest=0 00:30:48.301 21:31:42 -- nvmf/common.sh@694 -- # python - 00:30:48.301 21:31:42 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.VwQ 00:30:48.301 21:31:42 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.VwQ 00:30:48.301 21:31:42 -- host/auth.sh@82 -- # keys[1]=/tmp/spdk.key-null.VwQ 00:30:48.301 21:31:42 -- host/auth.sh@83 -- # gen_key sha256 32 00:30:48.301 21:31:42 -- host/auth.sh@53 -- # local digest len file key 00:30:48.301 21:31:42 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:48.301 21:31:42 -- host/auth.sh@54 -- # local -A digests 00:30:48.301 21:31:42 -- host/auth.sh@56 -- # digest=sha256 00:30:48.301 21:31:42 -- host/auth.sh@56 -- # len=32 00:30:48.301 21:31:42 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:30:48.301 21:31:42 -- host/auth.sh@57 -- # key=dc96b5a96ea9c7fc776dec6496d67319 00:30:48.301 21:31:42 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha256.XXX 00:30:48.301 21:31:42 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha256.uQl 00:30:48.301 21:31:42 -- host/auth.sh@59 -- # format_dhchap_key dc96b5a96ea9c7fc776dec6496d67319 1 00:30:48.301 21:31:42 -- nvmf/common.sh@708 -- # format_key DHHC-1 dc96b5a96ea9c7fc776dec6496d67319 1 00:30:48.301 21:31:42 -- nvmf/common.sh@691 -- # local prefix key digest 00:30:48.301 21:31:42 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:30:48.301 21:31:42 -- nvmf/common.sh@693 -- # key=dc96b5a96ea9c7fc776dec6496d67319 00:30:48.301 21:31:42 -- nvmf/common.sh@693 -- # digest=1 00:30:48.301 21:31:42 -- nvmf/common.sh@694 -- # python - 00:30:48.301 21:31:42 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha256.uQl 00:30:48.301 21:31:42 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha256.uQl 00:30:48.301 21:31:42 -- host/auth.sh@83 -- # keys[2]=/tmp/spdk.key-sha256.uQl 00:30:48.301 21:31:42 -- host/auth.sh@84 -- # gen_key sha384 48 00:30:48.301 21:31:42 -- host/auth.sh@53 -- # local digest len file key 00:30:48.301 21:31:42 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:48.301 21:31:42 -- host/auth.sh@54 -- # local -A digests 00:30:48.301 21:31:42 -- host/auth.sh@56 -- # digest=sha384 00:30:48.301 21:31:42 -- host/auth.sh@56 -- # len=48 00:30:48.301 21:31:42 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:30:48.301 21:31:42 -- host/auth.sh@57 -- # key=216e484d642ad6058e15f0fea175136b285c10857ec7e88b 00:30:48.301 21:31:42 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha384.XXX 00:30:48.301 21:31:42 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha384.KEp 00:30:48.301 21:31:42 -- host/auth.sh@59 -- # format_dhchap_key 216e484d642ad6058e15f0fea175136b285c10857ec7e88b 2 00:30:48.301 21:31:42 -- nvmf/common.sh@708 -- # format_key DHHC-1 216e484d642ad6058e15f0fea175136b285c10857ec7e88b 2 00:30:48.301 21:31:42 -- nvmf/common.sh@691 -- # local prefix key digest 00:30:48.301 21:31:42 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:30:48.301 21:31:42 -- nvmf/common.sh@693 -- # key=216e484d642ad6058e15f0fea175136b285c10857ec7e88b 00:30:48.301 21:31:42 -- nvmf/common.sh@693 -- # digest=2 00:30:48.301 21:31:42 -- nvmf/common.sh@694 -- # python - 00:30:48.301 21:31:42 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha384.KEp 00:30:48.301 21:31:42 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha384.KEp 00:30:48.301 21:31:42 -- host/auth.sh@84 -- # keys[3]=/tmp/spdk.key-sha384.KEp 
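Every gen_key call above ends the same way: the hex string read from /dev/urandom goes through format_dhchap_key, which follows the DH-HMAC-CHAP secret representation by appending a CRC-32 of the secret and base64-wrapping the result into DHHC-1:<digest-id>:<base64>: (digest ids 0 through 3 for null/sha256/sha384/sha512, matching the digests map above). A standalone sketch of that transform; the helper name is mine, and the CRC byte order is an assumption worth checking against nvmf/common.sh:

recreate_dhchap_key() {    # hypothetical stand-in for gen_key + format_dhchap_key
    local hexkey=$1 digest=$2
    python3 - "$hexkey" "$digest" <<'PYEOF'
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")    # CRC-32 of the secret, assumed little-endian
print("DHHC-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()))
PYEOF
}
recreate_dhchap_key "$(xxd -p -c0 -l 16 /dev/urandom)" 0    # 32-hex-char key, null digest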
00:30:48.301 21:31:42 -- host/auth.sh@85 -- # gen_key sha512 64 00:30:48.301 21:31:42 -- host/auth.sh@53 -- # local digest len file key 00:30:48.301 21:31:42 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:48.301 21:31:42 -- host/auth.sh@54 -- # local -A digests 00:30:48.301 21:31:42 -- host/auth.sh@56 -- # digest=sha512 00:30:48.301 21:31:42 -- host/auth.sh@56 -- # len=64 00:30:48.301 21:31:42 -- host/auth.sh@57 -- # xxd -p -c0 -l 32 /dev/urandom 00:30:48.301 21:31:42 -- host/auth.sh@57 -- # key=17bbbab45f122ca6a826f96aabdf93e3f0e5575f7874c065b77023c6607089dd 00:30:48.301 21:31:42 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha512.XXX 00:30:48.301 21:31:42 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha512.oHd 00:30:48.301 21:31:42 -- host/auth.sh@59 -- # format_dhchap_key 17bbbab45f122ca6a826f96aabdf93e3f0e5575f7874c065b77023c6607089dd 3 00:30:48.301 21:31:42 -- nvmf/common.sh@708 -- # format_key DHHC-1 17bbbab45f122ca6a826f96aabdf93e3f0e5575f7874c065b77023c6607089dd 3 00:30:48.301 21:31:42 -- nvmf/common.sh@691 -- # local prefix key digest 00:30:48.301 21:31:42 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:30:48.301 21:31:42 -- nvmf/common.sh@693 -- # key=17bbbab45f122ca6a826f96aabdf93e3f0e5575f7874c065b77023c6607089dd 00:30:48.301 21:31:42 -- nvmf/common.sh@693 -- # digest=3 00:30:48.301 21:31:42 -- nvmf/common.sh@694 -- # python - 00:30:48.560 21:31:42 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha512.oHd 00:30:48.560 21:31:42 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha512.oHd 00:30:48.560 21:31:42 -- host/auth.sh@85 -- # keys[4]=/tmp/spdk.key-sha512.oHd 00:30:48.560 21:31:42 -- host/auth.sh@87 -- # waitforlisten 1637164 00:30:48.560 21:31:42 -- common/autotest_common.sh@817 -- # '[' -z 1637164 ']' 00:30:48.560 21:31:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:48.560 21:31:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:48.560 21:31:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:48.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
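Once the sha512 key lands in keys[4], the five key files are registered below via keyring_file_add_key and nvmet_auth_init builds a kernel target for nqn.2024-02.io.spdk:cnode0 over configfs. The trace only shows bare mkdir and echo commands; spelled out against the nvmet configfs attribute files they appear to land in (the attribute names are the standard nvmet ABI, the mapping is my reading of the trace, and the device and addresses are from this run):

subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=/sys/kernel/config/nvmet/ports/1
modprobe nvmet
mkdir -p "$subsys/namespaces/1" "$port"
echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"
echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
echo 1 > "$subsys/namespaces/1/enable"
echo 10.0.0.1 > "$port/addr_traddr"
echo tcp > "$port/addr_trtype"
echo 4420 > "$port/addr_trsvcid"
echo ipv4 > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"    # expose the subsystem on the port

auth.sh then creates /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0, writes 0 to what appears to be the subsystem's attr_allow_any_host, and links the host into allowed_hosts, so only the enrolled host NQN may attempt the DH-HMAC-CHAP handshake.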
00:30:48.560 21:31:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:48.560 21:31:42 -- common/autotest_common.sh@10 -- # set +x 00:30:48.560 21:31:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:48.560 21:31:42 -- common/autotest_common.sh@850 -- # return 0 00:30:48.560 21:31:42 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:30:48.560 21:31:42 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.fGX 00:30:48.560 21:31:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:48.560 21:31:42 -- common/autotest_common.sh@10 -- # set +x 00:30:48.560 21:31:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:48.560 21:31:42 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:30:48.560 21:31:42 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.VwQ 00:30:48.560 21:31:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:48.560 21:31:42 -- common/autotest_common.sh@10 -- # set +x 00:30:48.560 21:31:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:48.560 21:31:42 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:30:48.560 21:31:42 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.uQl 00:30:48.560 21:31:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:48.560 21:31:42 -- common/autotest_common.sh@10 -- # set +x 00:30:48.560 21:31:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:48.560 21:31:42 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:30:48.560 21:31:42 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.KEp 00:30:48.560 21:31:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:48.560 21:31:42 -- common/autotest_common.sh@10 -- # set +x 00:30:48.560 21:31:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:48.560 21:31:42 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:30:48.560 21:31:42 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.oHd 00:30:48.560 21:31:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:48.560 21:31:42 -- common/autotest_common.sh@10 -- # set +x 00:30:48.560 21:31:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:48.560 21:31:42 -- host/auth.sh@92 -- # nvmet_auth_init 00:30:48.560 21:31:42 -- host/auth.sh@35 -- # get_main_ns_ip 00:30:48.560 21:31:42 -- nvmf/common.sh@717 -- # local ip 00:30:48.560 21:31:42 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:48.560 21:31:42 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:48.560 21:31:42 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:48.560 21:31:42 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:48.560 21:31:42 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:48.560 21:31:42 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:48.560 21:31:42 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:48.560 21:31:42 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:48.560 21:31:42 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:48.560 21:31:42 -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:30:48.560 21:31:42 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:30:48.560 21:31:42 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:30:48.560 21:31:42 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:30:48.560 21:31:42 -- nvmf/common.sh@625 -- # 
kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:30:48.560 21:31:42 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:30:48.560 21:31:42 -- nvmf/common.sh@628 -- # local block nvme 00:30:48.560 21:31:42 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:30:48.560 21:31:42 -- nvmf/common.sh@631 -- # modprobe nvmet 00:30:48.560 21:31:42 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:30:48.560 21:31:42 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:30:51.089 Waiting for block devices as requested 00:30:51.089 0000:c9:00.0 (144d a80a): vfio-pci -> nvme 00:30:51.352 0000:74:02.0 (8086 0cfe): vfio-pci -> idxd 00:30:51.352 0000:f1:02.0 (8086 0cfe): vfio-pci -> idxd 00:30:51.352 0000:79:02.0 (8086 0cfe): vfio-pci -> idxd 00:30:51.352 0000:6f:01.0 (8086 0b25): vfio-pci -> idxd 00:30:51.352 0000:6f:02.0 (8086 0cfe): vfio-pci -> idxd 00:30:51.613 0000:f6:01.0 (8086 0b25): vfio-pci -> idxd 00:30:51.613 0000:f6:02.0 (8086 0cfe): vfio-pci -> idxd 00:30:51.613 0000:74:01.0 (8086 0b25): vfio-pci -> idxd 00:30:51.874 0000:6a:02.0 (8086 0cfe): vfio-pci -> idxd 00:30:51.874 0000:79:01.0 (8086 0b25): vfio-pci -> idxd 00:30:51.874 0000:ec:01.0 (8086 0b25): vfio-pci -> idxd 00:30:51.874 0000:6a:01.0 (8086 0b25): vfio-pci -> idxd 00:30:52.135 0000:ec:02.0 (8086 0cfe): vfio-pci -> idxd 00:30:52.135 0000:e7:01.0 (8086 0b25): vfio-pci -> idxd 00:30:52.135 0000:e7:02.0 (8086 0cfe): vfio-pci -> idxd 00:30:52.135 0000:f1:01.0 (8086 0b25): vfio-pci -> idxd 00:30:52.394 0000:03:00.0 (1344 51c3): vfio-pci -> nvme 00:30:52.965 21:31:47 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:30:52.965 21:31:47 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:30:52.965 21:31:47 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:30:52.965 21:31:47 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:30:52.965 21:31:47 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:30:52.965 21:31:47 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:30:52.965 21:31:47 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:30:52.965 21:31:47 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:30:52.965 21:31:47 -- scripts/common.sh@387 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:30:52.965 No valid GPT data, bailing 00:30:52.965 21:31:47 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:30:52.965 21:31:47 -- scripts/common.sh@391 -- # pt= 00:30:52.965 21:31:47 -- scripts/common.sh@392 -- # return 1 00:30:52.965 21:31:47 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:30:52.965 21:31:47 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:30:52.966 21:31:47 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme1n1 ]] 00:30:52.966 21:31:47 -- nvmf/common.sh@641 -- # is_block_zoned nvme1n1 00:30:52.966 21:31:47 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:30:52.966 21:31:47 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:30:52.966 21:31:47 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:30:52.966 21:31:47 -- nvmf/common.sh@642 -- # block_in_use nvme1n1 00:30:52.966 21:31:47 -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:30:52.966 21:31:47 -- scripts/common.sh@387 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py nvme1n1 00:30:52.966 No valid GPT data, bailing 00:30:52.966 21:31:47 
-- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:30:52.966 21:31:47 -- scripts/common.sh@391 -- # pt= 00:30:52.966 21:31:47 -- scripts/common.sh@392 -- # return 1 00:30:52.966 21:31:47 -- nvmf/common.sh@642 -- # nvme=/dev/nvme1n1 00:30:52.966 21:31:47 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme1n1 ]] 00:30:52.966 21:31:47 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:30:52.966 21:31:47 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:30:53.227 21:31:47 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:30:53.227 21:31:47 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:30:53.227 21:31:47 -- nvmf/common.sh@656 -- # echo 1 00:30:53.227 21:31:47 -- nvmf/common.sh@657 -- # echo /dev/nvme1n1 00:30:53.227 21:31:47 -- nvmf/common.sh@658 -- # echo 1 00:30:53.227 21:31:47 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:30:53.227 21:31:47 -- nvmf/common.sh@661 -- # echo tcp 00:30:53.227 21:31:47 -- nvmf/common.sh@662 -- # echo 4420 00:30:53.227 21:31:47 -- nvmf/common.sh@663 -- # echo ipv4 00:30:53.227 21:31:47 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:30:53.227 21:31:47 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -a 10.0.0.1 -t tcp -s 4420 00:30:53.227 00:30:53.227 Discovery Log Number of Records 2, Generation counter 2 00:30:53.227 =====Discovery Log Entry 0====== 00:30:53.227 trtype: tcp 00:30:53.227 adrfam: ipv4 00:30:53.227 subtype: current discovery subsystem 00:30:53.227 treq: not specified, sq flow control disable supported 00:30:53.227 portid: 1 00:30:53.227 trsvcid: 4420 00:30:53.227 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:30:53.227 traddr: 10.0.0.1 00:30:53.227 eflags: none 00:30:53.227 sectype: none 00:30:53.227 =====Discovery Log Entry 1====== 00:30:53.227 trtype: tcp 00:30:53.227 adrfam: ipv4 00:30:53.227 subtype: nvme subsystem 00:30:53.227 treq: not specified, sq flow control disable supported 00:30:53.227 portid: 1 00:30:53.227 trsvcid: 4420 00:30:53.227 subnqn: nqn.2024-02.io.spdk:cnode0 00:30:53.227 traddr: 10.0.0.1 00:30:53.227 eflags: none 00:30:53.227 sectype: none 00:30:53.227 21:31:47 -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:30:53.227 21:31:47 -- host/auth.sh@37 -- # echo 0 00:30:53.227 21:31:47 -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:30:53.227 21:31:47 -- host/auth.sh@95 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:30:53.227 21:31:47 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:53.227 21:31:47 -- host/auth.sh@44 -- # digest=sha256 00:30:53.227 21:31:47 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:53.227 21:31:47 -- host/auth.sh@44 -- # keyid=1 00:30:53.227 21:31:47 -- host/auth.sh@45 -- # key=DHHC-1:00:MjAzOTFjZDkyZTRkYzFjMWMyODkyODJlZjhkZTRjZDhkYzI0Mzk2YmVhNTNmOTg4snUxrw==: 00:30:53.227 21:31:47 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:53.227 21:31:47 -- host/auth.sh@48 -- # echo ffdhe2048 00:30:53.227 21:31:47 -- host/auth.sh@49 -- # echo DHHC-1:00:MjAzOTFjZDkyZTRkYzFjMWMyODkyODJlZjhkZTRjZDhkYzI0Mzk2YmVhNTNmOTg4snUxrw==: 00:30:53.227 21:31:47 -- 
host/auth.sh@100 -- # IFS=, 00:30:53.227 21:31:47 -- host/auth.sh@101 -- # printf %s sha256,sha384,sha512 00:30:53.227 21:31:47 -- host/auth.sh@100 -- # IFS=, 00:30:53.227 21:31:47 -- host/auth.sh@101 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:30:53.227 21:31:47 -- host/auth.sh@100 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:30:53.227 21:31:47 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:53.227 21:31:47 -- host/auth.sh@68 -- # digest=sha256,sha384,sha512 00:30:53.227 21:31:47 -- host/auth.sh@68 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:30:53.227 21:31:47 -- host/auth.sh@68 -- # keyid=1 00:30:53.227 21:31:47 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:30:53.227 21:31:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:53.227 21:31:47 -- common/autotest_common.sh@10 -- # set +x 00:30:53.227 21:31:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:53.227 21:31:47 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:53.227 21:31:47 -- nvmf/common.sh@717 -- # local ip 00:30:53.227 21:31:47 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:53.228 21:31:47 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:53.228 21:31:47 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:53.228 21:31:47 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:53.228 21:31:47 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:53.228 21:31:47 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:53.228 21:31:47 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:53.228 21:31:47 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:53.228 21:31:47 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:53.228 21:31:47 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:30:53.228 21:31:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:53.228 21:31:47 -- common/autotest_common.sh@10 -- # set +x 00:30:53.228 nvme0n1 00:30:53.228 21:31:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:53.228 21:31:47 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:53.228 21:31:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:53.228 21:31:47 -- common/autotest_common.sh@10 -- # set +x 00:30:53.228 21:31:47 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:53.228 21:31:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:53.228 21:31:47 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:53.228 21:31:47 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:53.228 21:31:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:53.228 21:31:47 -- common/autotest_common.sh@10 -- # set +x 00:30:53.228 21:31:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:53.228 21:31:47 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:30:53.228 21:31:47 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:30:53.228 21:31:47 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:53.228 21:31:47 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:30:53.228 21:31:47 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:53.228 21:31:47 -- host/auth.sh@44 -- # digest=sha256 00:30:53.228 21:31:47 -- host/auth.sh@44 -- # 
dhgroup=ffdhe2048 00:30:53.228 21:31:47 -- host/auth.sh@44 -- # keyid=0 00:30:53.228 21:31:47 -- host/auth.sh@45 -- # key=DHHC-1:00:MmMwYmQ2ODQ5NGM0ZWVmNjYxNThhMTIyODc5MGM2ZWHGiirJ: 00:30:53.228 21:31:47 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:53.228 21:31:47 -- host/auth.sh@48 -- # echo ffdhe2048 00:30:53.228 21:31:47 -- host/auth.sh@49 -- # echo DHHC-1:00:MmMwYmQ2ODQ5NGM0ZWVmNjYxNThhMTIyODc5MGM2ZWHGiirJ: 00:30:53.228 21:31:47 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 0 00:30:53.228 21:31:47 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:53.228 21:31:47 -- host/auth.sh@68 -- # digest=sha256 00:30:53.228 21:31:47 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:30:53.228 21:31:47 -- host/auth.sh@68 -- # keyid=0 00:30:53.228 21:31:47 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:30:53.228 21:31:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:53.228 21:31:47 -- common/autotest_common.sh@10 -- # set +x 00:30:53.228 21:31:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:53.228 21:31:47 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:53.228 21:31:47 -- nvmf/common.sh@717 -- # local ip 00:30:53.228 21:31:47 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:53.228 21:31:47 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:53.228 21:31:47 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:53.228 21:31:47 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:53.228 21:31:47 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:53.228 21:31:47 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:53.228 21:31:47 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:53.228 21:31:47 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:53.228 21:31:47 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:53.228 21:31:47 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:30:53.228 21:31:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:53.228 21:31:47 -- common/autotest_common.sh@10 -- # set +x 00:30:53.489 nvme0n1 00:30:53.489 21:31:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:53.489 21:31:47 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:53.489 21:31:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:53.489 21:31:47 -- common/autotest_common.sh@10 -- # set +x 00:30:53.489 21:31:47 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:53.489 21:31:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:53.489 21:31:47 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:53.489 21:31:47 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:53.489 21:31:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:53.489 21:31:47 -- common/autotest_common.sh@10 -- # set +x 00:30:53.490 21:31:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:53.490 21:31:47 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:53.490 21:31:47 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:30:53.490 21:31:47 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:53.490 21:31:47 -- host/auth.sh@44 -- # digest=sha256 00:30:53.490 21:31:47 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:53.490 21:31:47 -- host/auth.sh@44 -- # keyid=1 00:30:53.490 21:31:47 -- host/auth.sh@45 -- # 
key=DHHC-1:00:MjAzOTFjZDkyZTRkYzFjMWMyODkyODJlZjhkZTRjZDhkYzI0Mzk2YmVhNTNmOTg4snUxrw==: 00:30:53.490 21:31:47 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:53.490 21:31:47 -- host/auth.sh@48 -- # echo ffdhe2048 00:30:53.490 21:31:47 -- host/auth.sh@49 -- # echo DHHC-1:00:MjAzOTFjZDkyZTRkYzFjMWMyODkyODJlZjhkZTRjZDhkYzI0Mzk2YmVhNTNmOTg4snUxrw==: 00:30:53.490 21:31:47 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 1 00:30:53.490 21:31:47 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:53.490 21:31:47 -- host/auth.sh@68 -- # digest=sha256 00:30:53.490 21:31:47 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:30:53.490 21:31:47 -- host/auth.sh@68 -- # keyid=1 00:30:53.490 21:31:47 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:30:53.490 21:31:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:53.490 21:31:47 -- common/autotest_common.sh@10 -- # set +x 00:30:53.490 21:31:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:53.490 21:31:47 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:53.490 21:31:47 -- nvmf/common.sh@717 -- # local ip 00:30:53.490 21:31:47 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:53.490 21:31:47 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:53.490 21:31:47 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:53.490 21:31:47 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:53.490 21:31:47 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:53.490 21:31:47 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:53.490 21:31:47 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:53.490 21:31:47 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:53.490 21:31:47 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:53.490 21:31:47 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:30:53.490 21:31:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:53.490 21:31:47 -- common/autotest_common.sh@10 -- # set +x 00:30:53.490 nvme0n1 00:30:53.490 21:31:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:53.490 21:31:47 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:53.490 21:31:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:53.490 21:31:47 -- common/autotest_common.sh@10 -- # set +x 00:30:53.490 21:31:47 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:53.490 21:31:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:53.748 21:31:47 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:53.748 21:31:47 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:53.748 21:31:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:53.748 21:31:47 -- common/autotest_common.sh@10 -- # set +x 00:30:53.748 21:31:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:53.748 21:31:47 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:53.748 21:31:47 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:30:53.748 21:31:47 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:53.748 21:31:47 -- host/auth.sh@44 -- # digest=sha256 00:30:53.748 21:31:47 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:53.748 21:31:47 -- host/auth.sh@44 -- # keyid=2 00:30:53.748 21:31:47 -- host/auth.sh@45 -- # key=DHHC-1:01:ZGM5NmI1YTk2ZWE5YzdmYzc3NmRlYzY0OTZkNjczMTnDSmIv: 00:30:53.748 21:31:47 -- host/auth.sh@47 -- # echo 
'hmac(sha256)' 00:30:53.748 21:31:47 -- host/auth.sh@48 -- # echo ffdhe2048 00:30:53.748 21:31:47 -- host/auth.sh@49 -- # echo DHHC-1:01:ZGM5NmI1YTk2ZWE5YzdmYzc3NmRlYzY0OTZkNjczMTnDSmIv: 00:30:53.748 21:31:47 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 2 00:30:53.748 21:31:47 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:53.748 21:31:47 -- host/auth.sh@68 -- # digest=sha256 00:30:53.748 21:31:47 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:30:53.748 21:31:47 -- host/auth.sh@68 -- # keyid=2 00:30:53.748 21:31:47 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:30:53.748 21:31:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:53.748 21:31:47 -- common/autotest_common.sh@10 -- # set +x 00:30:53.748 21:31:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:53.748 21:31:47 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:53.748 21:31:47 -- nvmf/common.sh@717 -- # local ip 00:30:53.748 21:31:47 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:53.748 21:31:47 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:53.748 21:31:47 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:53.748 21:31:47 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:53.748 21:31:47 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:53.748 21:31:47 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:53.748 21:31:47 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:53.749 21:31:47 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:53.749 21:31:47 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:53.749 21:31:47 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:30:53.749 21:31:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:53.749 21:31:47 -- common/autotest_common.sh@10 -- # set +x 00:30:53.749 nvme0n1 00:30:53.749 21:31:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:53.749 21:31:47 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:53.749 21:31:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:53.749 21:31:47 -- common/autotest_common.sh@10 -- # set +x 00:30:53.749 21:31:47 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:53.749 21:31:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:53.749 21:31:47 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:53.749 21:31:47 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:53.749 21:31:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:53.749 21:31:47 -- common/autotest_common.sh@10 -- # set +x 00:30:53.749 21:31:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:53.749 21:31:47 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:53.749 21:31:47 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:30:53.749 21:31:47 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:53.749 21:31:47 -- host/auth.sh@44 -- # digest=sha256 00:30:53.749 21:31:47 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:53.749 21:31:47 -- host/auth.sh@44 -- # keyid=3 00:30:53.749 21:31:47 -- host/auth.sh@45 -- # key=DHHC-1:02:MjE2ZTQ4NGQ2NDJhZDYwNThlMTVmMGZlYTE3NTEzNmIyODVjMTA4NTdlYzdlODhiW7UdpQ==: 00:30:53.749 21:31:47 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:53.749 21:31:47 -- host/auth.sh@48 -- # echo ffdhe2048 00:30:53.749 21:31:47 -- host/auth.sh@49 -- # echo 
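
Each keyid pass in this stretch repeats the same host-side recipe. Here is a hedged paraphrase of the pass just traced (sha256 / ffdhe2048 / keyid=2) as plain rpc.py calls; the command names and arguments are copied from the trace, while rpc_py standing for scripts/rpc.py is an assumption about what rpc_cmd dispatches to.

rpc_py=scripts/rpc.py    # assumption: rpc_cmd wraps this SPDK helper

# Advertise only the digest/dhgroup pair under test on the host side.
$rpc_py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

# Connect with the matching named key; on success the RPC prints the
# controller's namespace bdev, which is where the stray "nvme0n1" lines
# in the log come from.
$rpc_py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2

# Verify the controller registered, then tear it down for the next pass.
[[ $($rpc_py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
$rpc_py bdev_nvme_detach_controller nvme0
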
DHHC-1:02:MjE2ZTQ4NGQ2NDJhZDYwNThlMTVmMGZlYTE3NTEzNmIyODVjMTA4NTdlYzdlODhiW7UdpQ==: 00:30:53.749 21:31:47 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 3 00:30:53.749 21:31:47 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:53.749 21:31:47 -- host/auth.sh@68 -- # digest=sha256 00:30:53.749 21:31:47 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:30:53.749 21:31:47 -- host/auth.sh@68 -- # keyid=3 00:30:53.749 21:31:47 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:30:53.749 21:31:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:53.749 21:31:47 -- common/autotest_common.sh@10 -- # set +x 00:30:53.749 21:31:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:53.749 21:31:47 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:53.749 21:31:47 -- nvmf/common.sh@717 -- # local ip 00:30:53.749 21:31:47 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:53.749 21:31:47 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:53.749 21:31:47 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:53.749 21:31:47 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:53.749 21:31:47 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:53.749 21:31:47 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:53.749 21:31:47 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:53.749 21:31:47 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:53.749 21:31:47 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:53.749 21:31:47 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:30:53.749 21:31:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:53.749 21:31:47 -- common/autotest_common.sh@10 -- # set +x 00:30:54.007 nvme0n1 00:30:54.007 21:31:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:54.007 21:31:48 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:54.007 21:31:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:54.007 21:31:48 -- common/autotest_common.sh@10 -- # set +x 00:30:54.007 21:31:48 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:54.007 21:31:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:54.007 21:31:48 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:54.007 21:31:48 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:54.007 21:31:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:54.007 21:31:48 -- common/autotest_common.sh@10 -- # set +x 00:30:54.007 21:31:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:54.007 21:31:48 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:54.007 21:31:48 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:30:54.007 21:31:48 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:54.007 21:31:48 -- host/auth.sh@44 -- # digest=sha256 00:30:54.007 21:31:48 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:54.007 21:31:48 -- host/auth.sh@44 -- # keyid=4 00:30:54.007 21:31:48 -- host/auth.sh@45 -- # key=DHHC-1:03:MTdiYmJhYjQ1ZjEyMmNhNmE4MjZmOTZhYWJkZjkzZTNmMGU1NTc1Zjc4NzRjMDY1Yjc3MDIzYzY2MDcwODlkZE1k1io=: 00:30:54.007 21:31:48 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:54.007 21:31:48 -- host/auth.sh@48 -- # echo ffdhe2048 00:30:54.007 21:31:48 -- host/auth.sh@49 -- # echo 
DHHC-1:03:MTdiYmJhYjQ1ZjEyMmNhNmE4MjZmOTZhYWJkZjkzZTNmMGU1NTc1Zjc4NzRjMDY1Yjc3MDIzYzY2MDcwODlkZE1k1io=: 00:30:54.007 21:31:48 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 4 00:30:54.008 21:31:48 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:54.008 21:31:48 -- host/auth.sh@68 -- # digest=sha256 00:30:54.008 21:31:48 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:30:54.008 21:31:48 -- host/auth.sh@68 -- # keyid=4 00:30:54.008 21:31:48 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:30:54.008 21:31:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:54.008 21:31:48 -- common/autotest_common.sh@10 -- # set +x 00:30:54.008 21:31:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:54.008 21:31:48 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:54.008 21:31:48 -- nvmf/common.sh@717 -- # local ip 00:30:54.008 21:31:48 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:54.008 21:31:48 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:54.008 21:31:48 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:54.008 21:31:48 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:54.008 21:31:48 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:54.008 21:31:48 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:54.008 21:31:48 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:54.008 21:31:48 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:54.008 21:31:48 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:54.008 21:31:48 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:54.008 21:31:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:54.008 21:31:48 -- common/autotest_common.sh@10 -- # set +x 00:30:54.008 nvme0n1 00:30:54.008 21:31:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:54.008 21:31:48 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:54.008 21:31:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:54.008 21:31:48 -- common/autotest_common.sh@10 -- # set +x 00:30:54.008 21:31:48 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:54.008 21:31:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:54.008 21:31:48 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:54.008 21:31:48 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:54.008 21:31:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:54.008 21:31:48 -- common/autotest_common.sh@10 -- # set +x 00:30:54.268 21:31:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:54.268 21:31:48 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:30:54.268 21:31:48 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:54.268 21:31:48 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:30:54.268 21:31:48 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:54.268 21:31:48 -- host/auth.sh@44 -- # digest=sha256 00:30:54.268 21:31:48 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:54.268 21:31:48 -- host/auth.sh@44 -- # keyid=0 00:30:54.268 21:31:48 -- host/auth.sh@45 -- # key=DHHC-1:00:MmMwYmQ2ODQ5NGM0ZWVmNjYxNThhMTIyODc5MGM2ZWHGiirJ: 00:30:54.268 21:31:48 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:54.268 21:31:48 -- host/auth.sh@48 -- # echo ffdhe3072 00:30:54.268 21:31:48 -- host/auth.sh@49 -- # echo 
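
The host/auth.sh@108-110 markers above expose the sweep's control flow: an outer walk over DH groups, an inner walk over key indices, re-keying the target before every connect. A sketch of that skeleton follows; the dhgroups list is inferred from what this run exercises, and keys[], nvmet_auth_set_key and connect_authenticate are the script's own definitions from earlier in host/auth.sh, not reproduced here.

digests=(sha256)    # this portion of the run covers sha256 only
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do                 # keys 0..4, set up earlier
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side
            connect_authenticate "$digest" "$dhgroup" "$keyid"  # host side
        done
    done
done
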
DHHC-1:00:MmMwYmQ2ODQ5NGM0ZWVmNjYxNThhMTIyODc5MGM2ZWHGiirJ: 00:30:54.268 21:31:48 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 0 00:30:54.268 21:31:48 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:54.268 21:31:48 -- host/auth.sh@68 -- # digest=sha256 00:30:54.268 21:31:48 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:30:54.268 21:31:48 -- host/auth.sh@68 -- # keyid=0 00:30:54.268 21:31:48 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:30:54.268 21:31:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:54.268 21:31:48 -- common/autotest_common.sh@10 -- # set +x 00:30:54.268 21:31:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:54.268 21:31:48 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:54.268 21:31:48 -- nvmf/common.sh@717 -- # local ip 00:30:54.268 21:31:48 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:54.268 21:31:48 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:54.268 21:31:48 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:54.268 21:31:48 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:54.268 21:31:48 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:54.268 21:31:48 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:54.268 21:31:48 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:54.268 21:31:48 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:54.268 21:31:48 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:54.268 21:31:48 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:30:54.268 21:31:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:54.268 21:31:48 -- common/autotest_common.sh@10 -- # set +x 00:30:54.268 nvme0n1 00:30:54.268 21:31:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:54.268 21:31:48 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:54.268 21:31:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:54.268 21:31:48 -- common/autotest_common.sh@10 -- # set +x 00:30:54.268 21:31:48 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:54.268 21:31:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:54.268 21:31:48 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:54.268 21:31:48 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:54.268 21:31:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:54.268 21:31:48 -- common/autotest_common.sh@10 -- # set +x 00:30:54.268 21:31:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:54.268 21:31:48 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:54.268 21:31:48 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:30:54.268 21:31:48 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:54.268 21:31:48 -- host/auth.sh@44 -- # digest=sha256 00:30:54.268 21:31:48 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:54.268 21:31:48 -- host/auth.sh@44 -- # keyid=1 00:30:54.268 21:31:48 -- host/auth.sh@45 -- # key=DHHC-1:00:MjAzOTFjZDkyZTRkYzFjMWMyODkyODJlZjhkZTRjZDhkYzI0Mzk2YmVhNTNmOTg4snUxrw==: 00:30:54.268 21:31:48 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:54.268 21:31:48 -- host/auth.sh@48 -- # echo ffdhe3072 00:30:54.268 21:31:48 -- host/auth.sh@49 -- # echo DHHC-1:00:MjAzOTFjZDkyZTRkYzFjMWMyODkyODJlZjhkZTRjZDhkYzI0Mzk2YmVhNTNmOTg4snUxrw==: 00:30:54.268 21:31:48 -- host/auth.sh@111 -- # 
connect_authenticate sha256 ffdhe3072 1 00:30:54.268 21:31:48 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:54.268 21:31:48 -- host/auth.sh@68 -- # digest=sha256 00:30:54.268 21:31:48 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:30:54.268 21:31:48 -- host/auth.sh@68 -- # keyid=1 00:30:54.268 21:31:48 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:30:54.268 21:31:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:54.268 21:31:48 -- common/autotest_common.sh@10 -- # set +x 00:30:54.268 21:31:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:54.268 21:31:48 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:54.268 21:31:48 -- nvmf/common.sh@717 -- # local ip 00:30:54.268 21:31:48 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:54.268 21:31:48 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:54.269 21:31:48 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:54.269 21:31:48 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:54.269 21:31:48 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:54.269 21:31:48 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:54.269 21:31:48 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:54.269 21:31:48 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:54.269 21:31:48 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:54.269 21:31:48 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:30:54.269 21:31:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:54.269 21:31:48 -- common/autotest_common.sh@10 -- # set +x 00:30:54.528 nvme0n1 00:30:54.528 21:31:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:54.528 21:31:48 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:54.529 21:31:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:54.529 21:31:48 -- common/autotest_common.sh@10 -- # set +x 00:30:54.529 21:31:48 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:54.529 21:31:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:54.529 21:31:48 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:54.529 21:31:48 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:54.529 21:31:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:54.529 21:31:48 -- common/autotest_common.sh@10 -- # set +x 00:30:54.529 21:31:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:54.529 21:31:48 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:54.529 21:31:48 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:30:54.529 21:31:48 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:54.529 21:31:48 -- host/auth.sh@44 -- # digest=sha256 00:30:54.529 21:31:48 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:54.529 21:31:48 -- host/auth.sh@44 -- # keyid=2 00:30:54.529 21:31:48 -- host/auth.sh@45 -- # key=DHHC-1:01:ZGM5NmI1YTk2ZWE5YzdmYzc3NmRlYzY0OTZkNjczMTnDSmIv: 00:30:54.529 21:31:48 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:54.529 21:31:48 -- host/auth.sh@48 -- # echo ffdhe3072 00:30:54.529 21:31:48 -- host/auth.sh@49 -- # echo DHHC-1:01:ZGM5NmI1YTk2ZWE5YzdmYzc3NmRlYzY0OTZkNjczMTnDSmIv: 00:30:54.529 21:31:48 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 2 00:30:54.529 21:31:48 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:54.529 21:31:48 -- host/auth.sh@68 -- # 
digest=sha256 00:30:54.529 21:31:48 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:30:54.529 21:31:48 -- host/auth.sh@68 -- # keyid=2 00:30:54.529 21:31:48 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:30:54.529 21:31:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:54.529 21:31:48 -- common/autotest_common.sh@10 -- # set +x 00:30:54.529 21:31:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:54.529 21:31:48 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:54.529 21:31:48 -- nvmf/common.sh@717 -- # local ip 00:30:54.529 21:31:48 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:54.529 21:31:48 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:54.529 21:31:48 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:54.529 21:31:48 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:54.529 21:31:48 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:54.529 21:31:48 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:54.529 21:31:48 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:54.529 21:31:48 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:54.529 21:31:48 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:54.529 21:31:48 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:30:54.529 21:31:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:54.529 21:31:48 -- common/autotest_common.sh@10 -- # set +x 00:30:54.789 nvme0n1 00:30:54.789 21:31:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:54.789 21:31:48 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:54.789 21:31:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:54.789 21:31:48 -- common/autotest_common.sh@10 -- # set +x 00:30:54.789 21:31:48 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:54.789 21:31:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:54.789 21:31:48 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:54.789 21:31:48 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:54.789 21:31:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:54.789 21:31:48 -- common/autotest_common.sh@10 -- # set +x 00:30:54.789 21:31:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:54.789 21:31:48 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:54.789 21:31:48 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:30:54.789 21:31:48 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:54.789 21:31:48 -- host/auth.sh@44 -- # digest=sha256 00:30:54.789 21:31:48 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:54.789 21:31:48 -- host/auth.sh@44 -- # keyid=3 00:30:54.789 21:31:48 -- host/auth.sh@45 -- # key=DHHC-1:02:MjE2ZTQ4NGQ2NDJhZDYwNThlMTVmMGZlYTE3NTEzNmIyODVjMTA4NTdlYzdlODhiW7UdpQ==: 00:30:54.789 21:31:48 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:54.789 21:31:48 -- host/auth.sh@48 -- # echo ffdhe3072 00:30:54.789 21:31:48 -- host/auth.sh@49 -- # echo DHHC-1:02:MjE2ZTQ4NGQ2NDJhZDYwNThlMTVmMGZlYTE3NTEzNmIyODVjMTA4NTdlYzdlODhiW7UdpQ==: 00:30:54.789 21:31:48 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 3 00:30:54.789 21:31:48 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:54.789 21:31:48 -- host/auth.sh@68 -- # digest=sha256 00:30:54.789 21:31:48 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:30:54.789 21:31:48 -- host/auth.sh@68 
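
The nvmf/common.sh@717-731 runs that keep recurring are get_main_ns_ip picking the address to dial. A reconstruction from the trace: the candidates table holds variable names keyed by transport, and the later "[[ -z 10.0.0.1 ]]" check implies an indirection step before the echo. Treat this as a sketch, not the exact source.

get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP    # names, not values
    ip_candidates["tcp"]=NVMF_INITIATOR_IP
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}
    [[ -z ${!ip} ]] && return 1                   # dereference the chosen name
    echo "${!ip}"
}

TEST_TRANSPORT=tcp NVMF_INITIATOR_IP=10.0.0.1 get_main_ns_ip   # prints 10.0.0.1
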
-- # keyid=3 00:30:54.789 21:31:48 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:30:54.789 21:31:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:54.789 21:31:48 -- common/autotest_common.sh@10 -- # set +x 00:30:54.789 21:31:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:54.789 21:31:48 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:54.789 21:31:48 -- nvmf/common.sh@717 -- # local ip 00:30:54.789 21:31:48 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:54.789 21:31:48 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:54.789 21:31:48 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:54.789 21:31:48 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:54.789 21:31:48 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:54.789 21:31:48 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:54.789 21:31:48 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:54.789 21:31:48 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:54.789 21:31:48 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:54.789 21:31:48 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:30:54.789 21:31:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:54.789 21:31:48 -- common/autotest_common.sh@10 -- # set +x 00:30:54.789 nvme0n1 00:30:54.789 21:31:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:54.789 21:31:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:54.789 21:31:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:54.789 21:31:49 -- common/autotest_common.sh@10 -- # set +x 00:30:54.789 21:31:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:54.789 21:31:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:55.048 21:31:49 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:55.048 21:31:49 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:55.048 21:31:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:55.048 21:31:49 -- common/autotest_common.sh@10 -- # set +x 00:30:55.048 21:31:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:55.048 21:31:49 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:55.048 21:31:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:30:55.048 21:31:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:55.048 21:31:49 -- host/auth.sh@44 -- # digest=sha256 00:30:55.048 21:31:49 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:55.048 21:31:49 -- host/auth.sh@44 -- # keyid=4 00:30:55.048 21:31:49 -- host/auth.sh@45 -- # key=DHHC-1:03:MTdiYmJhYjQ1ZjEyMmNhNmE4MjZmOTZhYWJkZjkzZTNmMGU1NTc1Zjc4NzRjMDY1Yjc3MDIzYzY2MDcwODlkZE1k1io=: 00:30:55.048 21:31:49 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:55.048 21:31:49 -- host/auth.sh@48 -- # echo ffdhe3072 00:30:55.048 21:31:49 -- host/auth.sh@49 -- # echo DHHC-1:03:MTdiYmJhYjQ1ZjEyMmNhNmE4MjZmOTZhYWJkZjkzZTNmMGU1NTc1Zjc4NzRjMDY1Yjc3MDIzYzY2MDcwODlkZE1k1io=: 00:30:55.048 21:31:49 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 4 00:30:55.048 21:31:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:55.048 21:31:49 -- host/auth.sh@68 -- # digest=sha256 00:30:55.048 21:31:49 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:30:55.048 21:31:49 -- host/auth.sh@68 -- # keyid=4 00:30:55.048 21:31:49 -- host/auth.sh@69 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:30:55.048 21:31:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:55.048 21:31:49 -- common/autotest_common.sh@10 -- # set +x 00:30:55.048 21:31:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:55.048 21:31:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:55.048 21:31:49 -- nvmf/common.sh@717 -- # local ip 00:30:55.048 21:31:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:55.049 21:31:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:55.049 21:31:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:55.049 21:31:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:55.049 21:31:49 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:55.049 21:31:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:55.049 21:31:49 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:55.049 21:31:49 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:55.049 21:31:49 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:55.049 21:31:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:55.049 21:31:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:55.049 21:31:49 -- common/autotest_common.sh@10 -- # set +x 00:30:55.049 nvme0n1 00:30:55.049 21:31:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:55.049 21:31:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:55.049 21:31:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:55.049 21:31:49 -- common/autotest_common.sh@10 -- # set +x 00:30:55.049 21:31:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:55.049 21:31:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:55.049 21:31:49 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:55.049 21:31:49 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:55.049 21:31:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:55.049 21:31:49 -- common/autotest_common.sh@10 -- # set +x 00:30:55.049 21:31:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:55.049 21:31:49 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:30:55.049 21:31:49 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:55.049 21:31:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:30:55.049 21:31:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:55.049 21:31:49 -- host/auth.sh@44 -- # digest=sha256 00:30:55.049 21:31:49 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:55.049 21:31:49 -- host/auth.sh@44 -- # keyid=0 00:30:55.049 21:31:49 -- host/auth.sh@45 -- # key=DHHC-1:00:MmMwYmQ2ODQ5NGM0ZWVmNjYxNThhMTIyODc5MGM2ZWHGiirJ: 00:30:55.049 21:31:49 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:55.049 21:31:49 -- host/auth.sh@48 -- # echo ffdhe4096 00:30:55.049 21:31:49 -- host/auth.sh@49 -- # echo DHHC-1:00:MmMwYmQ2ODQ5NGM0ZWVmNjYxNThhMTIyODc5MGM2ZWHGiirJ: 00:30:55.049 21:31:49 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 0 00:30:55.049 21:31:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:55.049 21:31:49 -- host/auth.sh@68 -- # digest=sha256 00:30:55.049 21:31:49 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:30:55.049 21:31:49 -- host/auth.sh@68 -- # keyid=0 00:30:55.049 21:31:49 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:30:55.049 
21:31:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:55.049 21:31:49 -- common/autotest_common.sh@10 -- # set +x 00:30:55.049 21:31:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:55.049 21:31:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:55.049 21:31:49 -- nvmf/common.sh@717 -- # local ip 00:30:55.049 21:31:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:55.049 21:31:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:55.049 21:31:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:55.049 21:31:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:55.049 21:31:49 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:55.049 21:31:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:55.049 21:31:49 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:55.049 21:31:49 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:55.049 21:31:49 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:55.049 21:31:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:30:55.049 21:31:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:55.049 21:31:49 -- common/autotest_common.sh@10 -- # set +x 00:30:55.307 nvme0n1 00:30:55.308 21:31:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:55.308 21:31:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:55.308 21:31:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:55.308 21:31:49 -- common/autotest_common.sh@10 -- # set +x 00:30:55.308 21:31:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:55.308 21:31:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:55.308 21:31:49 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:55.308 21:31:49 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:55.308 21:31:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:55.308 21:31:49 -- common/autotest_common.sh@10 -- # set +x 00:30:55.308 21:31:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:55.308 21:31:49 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:55.308 21:31:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:30:55.308 21:31:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:55.308 21:31:49 -- host/auth.sh@44 -- # digest=sha256 00:30:55.308 21:31:49 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:55.308 21:31:49 -- host/auth.sh@44 -- # keyid=1 00:30:55.308 21:31:49 -- host/auth.sh@45 -- # key=DHHC-1:00:MjAzOTFjZDkyZTRkYzFjMWMyODkyODJlZjhkZTRjZDhkYzI0Mzk2YmVhNTNmOTg4snUxrw==: 00:30:55.308 21:31:49 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:55.308 21:31:49 -- host/auth.sh@48 -- # echo ffdhe4096 00:30:55.308 21:31:49 -- host/auth.sh@49 -- # echo DHHC-1:00:MjAzOTFjZDkyZTRkYzFjMWMyODkyODJlZjhkZTRjZDhkYzI0Mzk2YmVhNTNmOTg4snUxrw==: 00:30:55.308 21:31:49 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 1 00:30:55.308 21:31:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:55.308 21:31:49 -- host/auth.sh@68 -- # digest=sha256 00:30:55.308 21:31:49 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:30:55.308 21:31:49 -- host/auth.sh@68 -- # keyid=1 00:30:55.308 21:31:49 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:30:55.308 21:31:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:55.308 21:31:49 -- common/autotest_common.sh@10 -- # 
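
Every RPC above is bracketed by common/autotest_common.sh@549 (xtrace_disable) and a "[[ 0 == 0 ]]" at @577: shell tracing is silenced around the call and its exit status is asserted afterwards. The sketch below only captures that shape; the real rpc_cmd in SPDK's autotest_common.sh may well talk to a persistent rpc.py server rather than forking one per call, so this is an illustration, not the library's implementation.

rpc_cmd() {
    local rc
    xtrace_disable        # quiet "set +x" so RPC plumbing stays out of the log
    scripts/rpc.py "$@"
    rc=$?
    xtrace_restore
    [[ $rc == 0 ]]        # surfaces in the trace as "[[ 0 == 0 ]]"
}
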
set +x 00:30:55.308 21:31:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:55.308 21:31:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:55.308 21:31:49 -- nvmf/common.sh@717 -- # local ip 00:30:55.308 21:31:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:55.308 21:31:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:55.308 21:31:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:55.308 21:31:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:55.308 21:31:49 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:55.308 21:31:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:55.308 21:31:49 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:55.308 21:31:49 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:55.308 21:31:49 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:55.308 21:31:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:30:55.308 21:31:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:55.308 21:31:49 -- common/autotest_common.sh@10 -- # set +x 00:30:55.566 nvme0n1 00:30:55.566 21:31:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:55.566 21:31:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:55.566 21:31:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:55.566 21:31:49 -- common/autotest_common.sh@10 -- # set +x 00:30:55.566 21:31:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:55.566 21:31:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:55.566 21:31:49 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:55.566 21:31:49 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:55.566 21:31:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:55.566 21:31:49 -- common/autotest_common.sh@10 -- # set +x 00:30:55.566 21:31:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:55.566 21:31:49 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:55.566 21:31:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:30:55.566 21:31:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:55.566 21:31:49 -- host/auth.sh@44 -- # digest=sha256 00:30:55.566 21:31:49 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:55.566 21:31:49 -- host/auth.sh@44 -- # keyid=2 00:30:55.566 21:31:49 -- host/auth.sh@45 -- # key=DHHC-1:01:ZGM5NmI1YTk2ZWE5YzdmYzc3NmRlYzY0OTZkNjczMTnDSmIv: 00:30:55.566 21:31:49 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:55.566 21:31:49 -- host/auth.sh@48 -- # echo ffdhe4096 00:30:55.566 21:31:49 -- host/auth.sh@49 -- # echo DHHC-1:01:ZGM5NmI1YTk2ZWE5YzdmYzc3NmRlYzY0OTZkNjczMTnDSmIv: 00:30:55.566 21:31:49 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 2 00:30:55.566 21:31:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:55.566 21:31:49 -- host/auth.sh@68 -- # digest=sha256 00:30:55.566 21:31:49 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:30:55.566 21:31:49 -- host/auth.sh@68 -- # keyid=2 00:30:55.566 21:31:49 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:30:55.566 21:31:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:55.566 21:31:49 -- common/autotest_common.sh@10 -- # set +x 00:30:55.566 21:31:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:55.566 21:31:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:55.566 21:31:49 -- 
nvmf/common.sh@717 -- # local ip 00:30:55.566 21:31:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:55.566 21:31:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:55.566 21:31:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:55.566 21:31:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:55.566 21:31:49 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:55.566 21:31:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:55.566 21:31:49 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:55.566 21:31:49 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:55.566 21:31:49 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:55.566 21:31:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:30:55.566 21:31:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:55.566 21:31:49 -- common/autotest_common.sh@10 -- # set +x 00:30:55.824 nvme0n1 00:30:55.824 21:31:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:55.824 21:31:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:55.824 21:31:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:55.824 21:31:49 -- common/autotest_common.sh@10 -- # set +x 00:30:55.824 21:31:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:55.824 21:31:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:55.824 21:31:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:55.824 21:31:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:55.824 21:31:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:55.824 21:31:50 -- common/autotest_common.sh@10 -- # set +x 00:30:55.824 21:31:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:55.824 21:31:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:55.824 21:31:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:30:55.824 21:31:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:55.824 21:31:50 -- host/auth.sh@44 -- # digest=sha256 00:30:55.824 21:31:50 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:55.824 21:31:50 -- host/auth.sh@44 -- # keyid=3 00:30:55.824 21:31:50 -- host/auth.sh@45 -- # key=DHHC-1:02:MjE2ZTQ4NGQ2NDJhZDYwNThlMTVmMGZlYTE3NTEzNmIyODVjMTA4NTdlYzdlODhiW7UdpQ==: 00:30:55.824 21:31:50 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:55.824 21:31:50 -- host/auth.sh@48 -- # echo ffdhe4096 00:30:55.824 21:31:50 -- host/auth.sh@49 -- # echo DHHC-1:02:MjE2ZTQ4NGQ2NDJhZDYwNThlMTVmMGZlYTE3NTEzNmIyODVjMTA4NTdlYzdlODhiW7UdpQ==: 00:30:55.824 21:31:50 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 3 00:30:55.824 21:31:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:55.824 21:31:50 -- host/auth.sh@68 -- # digest=sha256 00:30:55.824 21:31:50 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:30:55.824 21:31:50 -- host/auth.sh@68 -- # keyid=3 00:30:55.824 21:31:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:30:55.824 21:31:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:55.824 21:31:50 -- common/autotest_common.sh@10 -- # set +x 00:30:55.824 21:31:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:55.824 21:31:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:55.824 21:31:50 -- nvmf/common.sh@717 -- # local ip 00:30:55.824 21:31:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:55.824 21:31:50 
-- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:55.824 21:31:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:55.824 21:31:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:55.824 21:31:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:55.824 21:31:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:55.824 21:31:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:55.824 21:31:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:55.824 21:31:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:55.824 21:31:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:30:55.824 21:31:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:55.824 21:31:50 -- common/autotest_common.sh@10 -- # set +x 00:30:56.082 nvme0n1 00:30:56.082 21:31:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:56.082 21:31:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:56.082 21:31:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:56.082 21:31:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:56.082 21:31:50 -- common/autotest_common.sh@10 -- # set +x 00:30:56.082 21:31:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:56.082 21:31:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:56.082 21:31:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:56.082 21:31:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:56.082 21:31:50 -- common/autotest_common.sh@10 -- # set +x 00:30:56.082 21:31:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:56.082 21:31:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:56.082 21:31:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:30:56.082 21:31:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:56.082 21:31:50 -- host/auth.sh@44 -- # digest=sha256 00:30:56.082 21:31:50 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:56.082 21:31:50 -- host/auth.sh@44 -- # keyid=4 00:30:56.082 21:31:50 -- host/auth.sh@45 -- # key=DHHC-1:03:MTdiYmJhYjQ1ZjEyMmNhNmE4MjZmOTZhYWJkZjkzZTNmMGU1NTc1Zjc4NzRjMDY1Yjc3MDIzYzY2MDcwODlkZE1k1io=: 00:30:56.082 21:31:50 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:56.082 21:31:50 -- host/auth.sh@48 -- # echo ffdhe4096 00:30:56.082 21:31:50 -- host/auth.sh@49 -- # echo DHHC-1:03:MTdiYmJhYjQ1ZjEyMmNhNmE4MjZmOTZhYWJkZjkzZTNmMGU1NTc1Zjc4NzRjMDY1Yjc3MDIzYzY2MDcwODlkZE1k1io=: 00:30:56.082 21:31:50 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 4 00:30:56.082 21:31:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:56.082 21:31:50 -- host/auth.sh@68 -- # digest=sha256 00:30:56.082 21:31:50 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:30:56.082 21:31:50 -- host/auth.sh@68 -- # keyid=4 00:30:56.082 21:31:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:30:56.082 21:31:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:56.082 21:31:50 -- common/autotest_common.sh@10 -- # set +x 00:30:56.082 21:31:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:56.082 21:31:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:56.082 21:31:50 -- nvmf/common.sh@717 -- # local ip 00:30:56.082 21:31:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:56.082 21:31:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:56.082 21:31:50 -- 
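
The target half, nvmet_auth_set_key, is where the echo 'hmac(sha256)' / echo ffdhe4096 / echo DHHC-1:... triplets at host/auth.sh@47-49 just above are headed. A guess at their destination, assuming the target is the Linux kernel nvmet and its per-host configfs DH-CHAP attributes; the paths and attribute names below are assumptions, not shown anywhere in the log.

hostnqn=nqn.2024-02.io.spdk:host0
host_dir=/sys/kernel/config/nvmet/hosts/$hostnqn

echo 'hmac(sha256)' > "$host_dir/dhchap_hash"     # kernel crypto name for the digest
echo 'ffdhe4096'    > "$host_dir/dhchap_dhgroup"
echo 'DHHC-1:02:MjE2ZTQ4NGQ2NDJhZDYwNThlMTVmMGZlYTE3NTEzNmIyODVjMTA4NTdlYzdlODhiW7UdpQ==:' \
    > "$host_dir/dhchap_key"
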
nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:56.082 21:31:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:56.082 21:31:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:56.082 21:31:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:56.082 21:31:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:56.082 21:31:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:56.082 21:31:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:56.082 21:31:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:56.082 21:31:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:56.082 21:31:50 -- common/autotest_common.sh@10 -- # set +x 00:30:56.342 nvme0n1 00:30:56.342 21:31:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:56.342 21:31:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:56.342 21:31:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:56.342 21:31:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:56.342 21:31:50 -- common/autotest_common.sh@10 -- # set +x 00:30:56.342 21:31:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:56.342 21:31:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:56.342 21:31:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:56.342 21:31:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:56.342 21:31:50 -- common/autotest_common.sh@10 -- # set +x 00:30:56.342 21:31:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:56.342 21:31:50 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:30:56.342 21:31:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:56.342 21:31:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:30:56.342 21:31:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:56.342 21:31:50 -- host/auth.sh@44 -- # digest=sha256 00:30:56.342 21:31:50 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:56.342 21:31:50 -- host/auth.sh@44 -- # keyid=0 00:30:56.342 21:31:50 -- host/auth.sh@45 -- # key=DHHC-1:00:MmMwYmQ2ODQ5NGM0ZWVmNjYxNThhMTIyODc5MGM2ZWHGiirJ: 00:30:56.342 21:31:50 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:56.342 21:31:50 -- host/auth.sh@48 -- # echo ffdhe6144 00:30:56.342 21:31:50 -- host/auth.sh@49 -- # echo DHHC-1:00:MmMwYmQ2ODQ5NGM0ZWVmNjYxNThhMTIyODc5MGM2ZWHGiirJ: 00:30:56.342 21:31:50 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 0 00:30:56.342 21:31:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:56.342 21:31:50 -- host/auth.sh@68 -- # digest=sha256 00:30:56.342 21:31:50 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:30:56.342 21:31:50 -- host/auth.sh@68 -- # keyid=0 00:30:56.342 21:31:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:30:56.342 21:31:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:56.342 21:31:50 -- common/autotest_common.sh@10 -- # set +x 00:30:56.342 21:31:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:56.342 21:31:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:56.342 21:31:50 -- nvmf/common.sh@717 -- # local ip 00:30:56.342 21:31:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:56.342 21:31:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:56.342 21:31:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:56.342 21:31:50 
-- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:56.342 21:31:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:56.342 21:31:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:56.342 21:31:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:56.342 21:31:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:56.342 21:31:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:56.342 21:31:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:30:56.342 21:31:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:56.342 21:31:50 -- common/autotest_common.sh@10 -- # set +x 00:30:56.913 nvme0n1 00:30:56.913 21:31:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:56.913 21:31:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:56.913 21:31:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:56.913 21:31:50 -- common/autotest_common.sh@10 -- # set +x 00:30:56.913 21:31:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:56.913 21:31:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:56.913 21:31:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:56.913 21:31:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:56.913 21:31:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:56.913 21:31:50 -- common/autotest_common.sh@10 -- # set +x 00:30:56.913 21:31:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:56.913 21:31:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:56.913 21:31:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:30:56.913 21:31:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:56.913 21:31:50 -- host/auth.sh@44 -- # digest=sha256 00:30:56.913 21:31:50 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:56.913 21:31:50 -- host/auth.sh@44 -- # keyid=1 00:30:56.913 21:31:50 -- host/auth.sh@45 -- # key=DHHC-1:00:MjAzOTFjZDkyZTRkYzFjMWMyODkyODJlZjhkZTRjZDhkYzI0Mzk2YmVhNTNmOTg4snUxrw==: 00:30:56.913 21:31:50 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:56.913 21:31:50 -- host/auth.sh@48 -- # echo ffdhe6144 00:30:56.913 21:31:50 -- host/auth.sh@49 -- # echo DHHC-1:00:MjAzOTFjZDkyZTRkYzFjMWMyODkyODJlZjhkZTRjZDhkYzI0Mzk2YmVhNTNmOTg4snUxrw==: 00:30:56.913 21:31:50 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 1 00:30:56.913 21:31:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:56.913 21:31:50 -- host/auth.sh@68 -- # digest=sha256 00:30:56.913 21:31:50 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:30:56.913 21:31:50 -- host/auth.sh@68 -- # keyid=1 00:30:56.913 21:31:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:30:56.913 21:31:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:56.913 21:31:50 -- common/autotest_common.sh@10 -- # set +x 00:30:56.913 21:31:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:56.913 21:31:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:56.913 21:31:50 -- nvmf/common.sh@717 -- # local ip 00:30:56.913 21:31:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:56.913 21:31:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:56.913 21:31:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:56.913 21:31:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:56.913 21:31:50 -- nvmf/common.sh@723 -- # [[ -z 
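
The ffdhe* identifiers being swept are the RFC 7919 finite-field Diffie-Hellman groups, walked smallest to largest. The modulus sizes below come from the RFC rather than from the log; the widening gaps between timestamps later in the run are consistent with the larger exponentiations.

declare -A ffdhe_bits=(
    [ffdhe2048]=2048 [ffdhe3072]=3072 [ffdhe4096]=4096
    [ffdhe6144]=6144 [ffdhe8192]=8192
)
for g in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do
    printf '%-9s %4d-bit MODP group\n' "$g" "${ffdhe_bits[$g]}"
done
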
tcp ]] 00:30:56.913 21:31:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:56.913 21:31:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:56.913 21:31:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:56.913 21:31:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:56.913 21:31:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:30:56.913 21:31:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:56.913 21:31:50 -- common/autotest_common.sh@10 -- # set +x 00:30:57.172 nvme0n1 00:30:57.172 21:31:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:57.172 21:31:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:57.172 21:31:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:57.172 21:31:51 -- common/autotest_common.sh@10 -- # set +x 00:30:57.172 21:31:51 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:57.172 21:31:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:57.172 21:31:51 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:57.172 21:31:51 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:57.172 21:31:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:57.172 21:31:51 -- common/autotest_common.sh@10 -- # set +x 00:30:57.172 21:31:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:57.172 21:31:51 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:57.172 21:31:51 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:30:57.172 21:31:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:57.172 21:31:51 -- host/auth.sh@44 -- # digest=sha256 00:30:57.172 21:31:51 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:57.172 21:31:51 -- host/auth.sh@44 -- # keyid=2 00:30:57.172 21:31:51 -- host/auth.sh@45 -- # key=DHHC-1:01:ZGM5NmI1YTk2ZWE5YzdmYzc3NmRlYzY0OTZkNjczMTnDSmIv: 00:30:57.172 21:31:51 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:57.172 21:31:51 -- host/auth.sh@48 -- # echo ffdhe6144 00:30:57.172 21:31:51 -- host/auth.sh@49 -- # echo DHHC-1:01:ZGM5NmI1YTk2ZWE5YzdmYzc3NmRlYzY0OTZkNjczMTnDSmIv: 00:30:57.172 21:31:51 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 2 00:30:57.172 21:31:51 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:57.172 21:31:51 -- host/auth.sh@68 -- # digest=sha256 00:30:57.172 21:31:51 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:30:57.172 21:31:51 -- host/auth.sh@68 -- # keyid=2 00:30:57.172 21:31:51 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:30:57.172 21:31:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:57.172 21:31:51 -- common/autotest_common.sh@10 -- # set +x 00:30:57.172 21:31:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:57.172 21:31:51 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:57.172 21:31:51 -- nvmf/common.sh@717 -- # local ip 00:30:57.172 21:31:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:57.172 21:31:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:57.172 21:31:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:57.172 21:31:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:57.172 21:31:51 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:57.172 21:31:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:57.172 21:31:51 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:57.172 
21:31:51 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:57.172 21:31:51 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:57.172 21:31:51 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:30:57.172 21:31:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:57.172 21:31:51 -- common/autotest_common.sh@10 -- # set +x 00:30:57.738 nvme0n1 00:30:57.738 21:31:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:57.738 21:31:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:57.738 21:31:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:57.738 21:31:51 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:57.738 21:31:51 -- common/autotest_common.sh@10 -- # set +x 00:30:57.738 21:31:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:57.738 21:31:51 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:57.738 21:31:51 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:57.738 21:31:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:57.738 21:31:51 -- common/autotest_common.sh@10 -- # set +x 00:30:57.738 21:31:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:57.738 21:31:51 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:57.738 21:31:51 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:30:57.738 21:31:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:57.738 21:31:51 -- host/auth.sh@44 -- # digest=sha256 00:30:57.738 21:31:51 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:57.738 21:31:51 -- host/auth.sh@44 -- # keyid=3 00:30:57.738 21:31:51 -- host/auth.sh@45 -- # key=DHHC-1:02:MjE2ZTQ4NGQ2NDJhZDYwNThlMTVmMGZlYTE3NTEzNmIyODVjMTA4NTdlYzdlODhiW7UdpQ==: 00:30:57.738 21:31:51 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:57.738 21:31:51 -- host/auth.sh@48 -- # echo ffdhe6144 00:30:57.738 21:31:51 -- host/auth.sh@49 -- # echo DHHC-1:02:MjE2ZTQ4NGQ2NDJhZDYwNThlMTVmMGZlYTE3NTEzNmIyODVjMTA4NTdlYzdlODhiW7UdpQ==: 00:30:57.738 21:31:51 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 3 00:30:57.738 21:31:51 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:57.738 21:31:51 -- host/auth.sh@68 -- # digest=sha256 00:30:57.738 21:31:51 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:30:57.738 21:31:51 -- host/auth.sh@68 -- # keyid=3 00:30:57.738 21:31:51 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:30:57.738 21:31:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:57.738 21:31:51 -- common/autotest_common.sh@10 -- # set +x 00:30:57.738 21:31:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:57.738 21:31:51 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:57.738 21:31:51 -- nvmf/common.sh@717 -- # local ip 00:30:57.738 21:31:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:57.738 21:31:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:57.738 21:31:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:57.738 21:31:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:57.738 21:31:51 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:57.738 21:31:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:57.738 21:31:51 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:57.738 21:31:51 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:57.738 21:31:51 -- nvmf/common.sh@731 -- # echo 10.0.0.1 
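
For reference, here are the five secrets this sweep rotates through, collected verbatim from the traces above. The second field is each key's transform tag; it is a property of the stored secret and varies across keyid 0-4 (00, 00, 01, 02, 03) while the negotiated digest stays sha256 throughout, so the two are independent.

keys[0]='DHHC-1:00:MmMwYmQ2ODQ5NGM0ZWVmNjYxNThhMTIyODc5MGM2ZWHGiirJ:'
keys[1]='DHHC-1:00:MjAzOTFjZDkyZTRkYzFjMWMyODkyODJlZjhkZTRjZDhkYzI0Mzk2YmVhNTNmOTg4snUxrw==:'
keys[2]='DHHC-1:01:ZGM5NmI1YTk2ZWE5YzdmYzc3NmRlYzY0OTZkNjczMTnDSmIv:'
keys[3]='DHHC-1:02:MjE2ZTQ4NGQ2NDJhZDYwNThlMTVmMGZlYTE3NTEzNmIyODVjMTA4NTdlYzdlODhiW7UdpQ==:'
keys[4]='DHHC-1:03:MTdiYmJhYjQ1ZjEyMmNhNmE4MjZmOTZhYWJkZjkzZTNmMGU1NTc1Zjc4NzRjMDY1Yjc3MDIzYzY2MDcwODlkZE1k1io=:'
for i in "${!keys[@]}"; do
    IFS=':' read -r _ transform _ <<< "${keys[$i]}"
    echo "keyid=$i transform=$transform"
done
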
00:30:57.738 21:31:51 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:30:57.738 21:31:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:57.738 21:31:51 -- common/autotest_common.sh@10 -- # set +x 00:30:57.997 nvme0n1 00:30:57.997 21:31:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:57.997 21:31:52 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:57.997 21:31:52 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:57.997 21:31:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:57.997 21:31:52 -- common/autotest_common.sh@10 -- # set +x 00:30:57.997 21:31:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:57.997 21:31:52 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:57.997 21:31:52 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:57.997 21:31:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:57.997 21:31:52 -- common/autotest_common.sh@10 -- # set +x 00:30:57.997 21:31:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:57.997 21:31:52 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:57.997 21:31:52 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:30:57.997 21:31:52 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:57.997 21:31:52 -- host/auth.sh@44 -- # digest=sha256 00:30:57.997 21:31:52 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:57.997 21:31:52 -- host/auth.sh@44 -- # keyid=4 00:30:57.998 21:31:52 -- host/auth.sh@45 -- # key=DHHC-1:03:MTdiYmJhYjQ1ZjEyMmNhNmE4MjZmOTZhYWJkZjkzZTNmMGU1NTc1Zjc4NzRjMDY1Yjc3MDIzYzY2MDcwODlkZE1k1io=: 00:30:57.998 21:31:52 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:57.998 21:31:52 -- host/auth.sh@48 -- # echo ffdhe6144 00:30:57.998 21:31:52 -- host/auth.sh@49 -- # echo DHHC-1:03:MTdiYmJhYjQ1ZjEyMmNhNmE4MjZmOTZhYWJkZjkzZTNmMGU1NTc1Zjc4NzRjMDY1Yjc3MDIzYzY2MDcwODlkZE1k1io=: 00:30:57.998 21:31:52 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 4 00:30:57.998 21:31:52 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:57.998 21:31:52 -- host/auth.sh@68 -- # digest=sha256 00:30:57.998 21:31:52 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:30:57.998 21:31:52 -- host/auth.sh@68 -- # keyid=4 00:30:57.998 21:31:52 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:30:57.998 21:31:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:57.998 21:31:52 -- common/autotest_common.sh@10 -- # set +x 00:30:57.998 21:31:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:57.998 21:31:52 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:57.998 21:31:52 -- nvmf/common.sh@717 -- # local ip 00:30:57.998 21:31:52 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:57.998 21:31:52 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:57.998 21:31:52 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:57.998 21:31:52 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:57.998 21:31:52 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:57.998 21:31:52 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:57.998 21:31:52 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:57.998 21:31:52 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:57.998 21:31:52 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:57.998 21:31:52 -- host/auth.sh@70 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:57.998 21:31:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:57.998 21:31:52 -- common/autotest_common.sh@10 -- # set +x 00:30:58.565 nvme0n1 00:30:58.565 21:31:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:58.565 21:31:52 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:58.565 21:31:52 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:58.565 21:31:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:58.565 21:31:52 -- common/autotest_common.sh@10 -- # set +x 00:30:58.565 21:31:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:58.565 21:31:52 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:58.565 21:31:52 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:58.565 21:31:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:58.565 21:31:52 -- common/autotest_common.sh@10 -- # set +x 00:30:58.565 21:31:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:58.565 21:31:52 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:30:58.565 21:31:52 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:58.565 21:31:52 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:30:58.565 21:31:52 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:58.565 21:31:52 -- host/auth.sh@44 -- # digest=sha256 00:30:58.565 21:31:52 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:58.565 21:31:52 -- host/auth.sh@44 -- # keyid=0 00:30:58.565 21:31:52 -- host/auth.sh@45 -- # key=DHHC-1:00:MmMwYmQ2ODQ5NGM0ZWVmNjYxNThhMTIyODc5MGM2ZWHGiirJ: 00:30:58.566 21:31:52 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:58.566 21:31:52 -- host/auth.sh@48 -- # echo ffdhe8192 00:30:58.566 21:31:52 -- host/auth.sh@49 -- # echo DHHC-1:00:MmMwYmQ2ODQ5NGM0ZWVmNjYxNThhMTIyODc5MGM2ZWHGiirJ: 00:30:58.566 21:31:52 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 0 00:30:58.566 21:31:52 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:58.566 21:31:52 -- host/auth.sh@68 -- # digest=sha256 00:30:58.566 21:31:52 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:30:58.566 21:31:52 -- host/auth.sh@68 -- # keyid=0 00:30:58.566 21:31:52 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:30:58.566 21:31:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:58.566 21:31:52 -- common/autotest_common.sh@10 -- # set +x 00:30:58.566 21:31:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:58.566 21:31:52 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:58.566 21:31:52 -- nvmf/common.sh@717 -- # local ip 00:30:58.566 21:31:52 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:58.566 21:31:52 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:58.566 21:31:52 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:58.566 21:31:52 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:58.566 21:31:52 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:58.566 21:31:52 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:58.566 21:31:52 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:58.566 21:31:52 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:58.566 21:31:52 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:58.566 21:31:52 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:30:58.566 21:31:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:58.566 21:31:52 -- common/autotest_common.sh@10 -- # set +x 00:30:59.136 nvme0n1 00:30:59.136 21:31:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:59.136 21:31:53 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:59.136 21:31:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:59.136 21:31:53 -- common/autotest_common.sh@10 -- # set +x 00:30:59.136 21:31:53 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:59.136 21:31:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:59.136 21:31:53 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:59.136 21:31:53 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:59.136 21:31:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:59.136 21:31:53 -- common/autotest_common.sh@10 -- # set +x 00:30:59.136 21:31:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:59.136 21:31:53 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:59.136 21:31:53 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:30:59.136 21:31:53 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:59.136 21:31:53 -- host/auth.sh@44 -- # digest=sha256 00:30:59.136 21:31:53 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:59.136 21:31:53 -- host/auth.sh@44 -- # keyid=1 00:30:59.136 21:31:53 -- host/auth.sh@45 -- # key=DHHC-1:00:MjAzOTFjZDkyZTRkYzFjMWMyODkyODJlZjhkZTRjZDhkYzI0Mzk2YmVhNTNmOTg4snUxrw==: 00:30:59.136 21:31:53 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:59.136 21:31:53 -- host/auth.sh@48 -- # echo ffdhe8192 00:30:59.136 21:31:53 -- host/auth.sh@49 -- # echo DHHC-1:00:MjAzOTFjZDkyZTRkYzFjMWMyODkyODJlZjhkZTRjZDhkYzI0Mzk2YmVhNTNmOTg4snUxrw==: 00:30:59.136 21:31:53 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 1 00:30:59.136 21:31:53 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:59.136 21:31:53 -- host/auth.sh@68 -- # digest=sha256 00:30:59.136 21:31:53 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:30:59.136 21:31:53 -- host/auth.sh@68 -- # keyid=1 00:30:59.136 21:31:53 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:30:59.136 21:31:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:59.136 21:31:53 -- common/autotest_common.sh@10 -- # set +x 00:30:59.136 21:31:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:59.136 21:31:53 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:59.136 21:31:53 -- nvmf/common.sh@717 -- # local ip 00:30:59.136 21:31:53 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:59.136 21:31:53 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:59.136 21:31:53 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:59.136 21:31:53 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:59.136 21:31:53 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:59.137 21:31:53 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:59.137 21:31:53 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:59.137 21:31:53 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:59.137 21:31:53 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:59.137 21:31:53 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:30:59.137 21:31:53 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:30:59.137 21:31:53 -- common/autotest_common.sh@10 -- # set +x 00:30:59.704 nvme0n1 00:30:59.704 21:31:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:59.704 21:31:53 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:30:59.704 21:31:53 -- host/auth.sh@73 -- # jq -r '.[].name' 00:30:59.704 21:31:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:59.704 21:31:53 -- common/autotest_common.sh@10 -- # set +x 00:30:59.704 21:31:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:59.704 21:31:53 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:59.704 21:31:53 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:59.704 21:31:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:59.704 21:31:53 -- common/autotest_common.sh@10 -- # set +x 00:30:59.704 21:31:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:59.704 21:31:53 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:30:59.704 21:31:53 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:30:59.704 21:31:53 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:30:59.704 21:31:53 -- host/auth.sh@44 -- # digest=sha256 00:30:59.704 21:31:53 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:59.704 21:31:53 -- host/auth.sh@44 -- # keyid=2 00:30:59.704 21:31:53 -- host/auth.sh@45 -- # key=DHHC-1:01:ZGM5NmI1YTk2ZWE5YzdmYzc3NmRlYzY0OTZkNjczMTnDSmIv: 00:30:59.704 21:31:53 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:30:59.704 21:31:53 -- host/auth.sh@48 -- # echo ffdhe8192 00:30:59.704 21:31:53 -- host/auth.sh@49 -- # echo DHHC-1:01:ZGM5NmI1YTk2ZWE5YzdmYzc3NmRlYzY0OTZkNjczMTnDSmIv: 00:30:59.704 21:31:53 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 2 00:30:59.704 21:31:53 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:30:59.704 21:31:53 -- host/auth.sh@68 -- # digest=sha256 00:30:59.704 21:31:53 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:30:59.704 21:31:53 -- host/auth.sh@68 -- # keyid=2 00:30:59.704 21:31:53 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:30:59.704 21:31:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:59.704 21:31:53 -- common/autotest_common.sh@10 -- # set +x 00:30:59.704 21:31:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:59.704 21:31:53 -- host/auth.sh@70 -- # get_main_ns_ip 00:30:59.704 21:31:53 -- nvmf/common.sh@717 -- # local ip 00:30:59.704 21:31:53 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:59.704 21:31:53 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:59.704 21:31:53 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:59.704 21:31:53 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:59.704 21:31:53 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:59.704 21:31:53 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:59.704 21:31:53 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:59.704 21:31:53 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:59.704 21:31:53 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:59.704 21:31:53 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:30:59.704 21:31:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:59.704 21:31:53 -- common/autotest_common.sh@10 -- # set +x 00:31:00.272 nvme0n1 00:31:00.272 21:31:54 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:00.272 21:31:54 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:00.272 21:31:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:00.272 21:31:54 -- common/autotest_common.sh@10 -- # set +x 00:31:00.272 21:31:54 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:00.272 21:31:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:00.272 21:31:54 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:00.272 21:31:54 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:00.272 21:31:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:00.272 21:31:54 -- common/autotest_common.sh@10 -- # set +x 00:31:00.272 21:31:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:00.272 21:31:54 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:00.272 21:31:54 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:31:00.272 21:31:54 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:00.272 21:31:54 -- host/auth.sh@44 -- # digest=sha256 00:31:00.272 21:31:54 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:00.272 21:31:54 -- host/auth.sh@44 -- # keyid=3 00:31:00.272 21:31:54 -- host/auth.sh@45 -- # key=DHHC-1:02:MjE2ZTQ4NGQ2NDJhZDYwNThlMTVmMGZlYTE3NTEzNmIyODVjMTA4NTdlYzdlODhiW7UdpQ==: 00:31:00.272 21:31:54 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:31:00.272 21:31:54 -- host/auth.sh@48 -- # echo ffdhe8192 00:31:00.272 21:31:54 -- host/auth.sh@49 -- # echo DHHC-1:02:MjE2ZTQ4NGQ2NDJhZDYwNThlMTVmMGZlYTE3NTEzNmIyODVjMTA4NTdlYzdlODhiW7UdpQ==: 00:31:00.272 21:31:54 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 3 00:31:00.272 21:31:54 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:00.272 21:31:54 -- host/auth.sh@68 -- # digest=sha256 00:31:00.272 21:31:54 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:31:00.272 21:31:54 -- host/auth.sh@68 -- # keyid=3 00:31:00.272 21:31:54 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:00.272 21:31:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:00.272 21:31:54 -- common/autotest_common.sh@10 -- # set +x 00:31:00.272 21:31:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:00.272 21:31:54 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:00.272 21:31:54 -- nvmf/common.sh@717 -- # local ip 00:31:00.272 21:31:54 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:00.272 21:31:54 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:00.272 21:31:54 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:00.272 21:31:54 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:00.272 21:31:54 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:00.272 21:31:54 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:00.272 21:31:54 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:00.272 21:31:54 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:00.272 21:31:54 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:00.272 21:31:54 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:31:00.272 21:31:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:00.272 21:31:54 -- common/autotest_common.sh@10 -- # set +x 00:31:00.841 nvme0n1 00:31:00.841 21:31:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:00.841 21:31:55 -- host/auth.sh@73 -- # rpc_cmd 
bdev_nvme_get_controllers 00:31:00.841 21:31:55 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:00.841 21:31:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:00.841 21:31:55 -- common/autotest_common.sh@10 -- # set +x 00:31:00.841 21:31:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:00.841 21:31:55 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:00.841 21:31:55 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:00.841 21:31:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:00.841 21:31:55 -- common/autotest_common.sh@10 -- # set +x 00:31:01.102 21:31:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:01.102 21:31:55 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:01.102 21:31:55 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:31:01.102 21:31:55 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:01.102 21:31:55 -- host/auth.sh@44 -- # digest=sha256 00:31:01.102 21:31:55 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:01.102 21:31:55 -- host/auth.sh@44 -- # keyid=4 00:31:01.102 21:31:55 -- host/auth.sh@45 -- # key=DHHC-1:03:MTdiYmJhYjQ1ZjEyMmNhNmE4MjZmOTZhYWJkZjkzZTNmMGU1NTc1Zjc4NzRjMDY1Yjc3MDIzYzY2MDcwODlkZE1k1io=: 00:31:01.102 21:31:55 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:31:01.102 21:31:55 -- host/auth.sh@48 -- # echo ffdhe8192 00:31:01.102 21:31:55 -- host/auth.sh@49 -- # echo DHHC-1:03:MTdiYmJhYjQ1ZjEyMmNhNmE4MjZmOTZhYWJkZjkzZTNmMGU1NTc1Zjc4NzRjMDY1Yjc3MDIzYzY2MDcwODlkZE1k1io=: 00:31:01.102 21:31:55 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 4 00:31:01.102 21:31:55 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:01.102 21:31:55 -- host/auth.sh@68 -- # digest=sha256 00:31:01.102 21:31:55 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:31:01.102 21:31:55 -- host/auth.sh@68 -- # keyid=4 00:31:01.102 21:31:55 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:01.102 21:31:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:01.102 21:31:55 -- common/autotest_common.sh@10 -- # set +x 00:31:01.102 21:31:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:01.102 21:31:55 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:01.102 21:31:55 -- nvmf/common.sh@717 -- # local ip 00:31:01.102 21:31:55 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:01.102 21:31:55 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:01.102 21:31:55 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:01.102 21:31:55 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:01.102 21:31:55 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:01.102 21:31:55 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:01.102 21:31:55 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:01.102 21:31:55 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:01.102 21:31:55 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:01.102 21:31:55 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:01.102 21:31:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:01.102 21:31:55 -- common/autotest_common.sh@10 -- # set +x 00:31:01.673 nvme0n1 00:31:01.673 21:31:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:01.673 21:31:55 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:01.673 21:31:55 -- host/auth.sh@73 -- # jq -r 
'.[].name' 00:31:01.673 21:31:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:01.673 21:31:55 -- common/autotest_common.sh@10 -- # set +x 00:31:01.673 21:31:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:01.673 21:31:55 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:01.673 21:31:55 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:01.673 21:31:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:01.673 21:31:55 -- common/autotest_common.sh@10 -- # set +x 00:31:01.673 21:31:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:01.673 21:31:55 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:31:01.673 21:31:55 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:31:01.673 21:31:55 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:01.673 21:31:55 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:31:01.673 21:31:55 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:01.673 21:31:55 -- host/auth.sh@44 -- # digest=sha384 00:31:01.673 21:31:55 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:01.673 21:31:55 -- host/auth.sh@44 -- # keyid=0 00:31:01.673 21:31:55 -- host/auth.sh@45 -- # key=DHHC-1:00:MmMwYmQ2ODQ5NGM0ZWVmNjYxNThhMTIyODc5MGM2ZWHGiirJ: 00:31:01.673 21:31:55 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:31:01.673 21:31:55 -- host/auth.sh@48 -- # echo ffdhe2048 00:31:01.673 21:31:55 -- host/auth.sh@49 -- # echo DHHC-1:00:MmMwYmQ2ODQ5NGM0ZWVmNjYxNThhMTIyODc5MGM2ZWHGiirJ: 00:31:01.673 21:31:55 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 0 00:31:01.673 21:31:55 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:01.673 21:31:55 -- host/auth.sh@68 -- # digest=sha384 00:31:01.673 21:31:55 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:31:01.673 21:31:55 -- host/auth.sh@68 -- # keyid=0 00:31:01.673 21:31:55 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:01.673 21:31:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:01.673 21:31:55 -- common/autotest_common.sh@10 -- # set +x 00:31:01.673 21:31:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:01.673 21:31:55 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:01.673 21:31:55 -- nvmf/common.sh@717 -- # local ip 00:31:01.673 21:31:55 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:01.673 21:31:55 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:01.673 21:31:55 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:01.673 21:31:55 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:01.673 21:31:55 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:01.673 21:31:55 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:01.673 21:31:55 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:01.673 21:31:55 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:01.673 21:31:55 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:01.673 21:31:55 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:31:01.673 21:31:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:01.673 21:31:55 -- common/autotest_common.sh@10 -- # set +x 00:31:01.673 nvme0n1 00:31:01.673 21:31:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:01.673 21:31:55 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:01.673 21:31:55 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:31:01.673 21:31:55 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:01.673 21:31:55 -- common/autotest_common.sh@10 -- # set +x 00:31:01.673 21:31:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:01.673 21:31:55 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:01.673 21:31:55 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:01.673 21:31:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:01.673 21:31:55 -- common/autotest_common.sh@10 -- # set +x 00:31:01.673 21:31:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:01.673 21:31:55 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:01.673 21:31:55 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:31:01.674 21:31:55 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:01.674 21:31:55 -- host/auth.sh@44 -- # digest=sha384 00:31:01.674 21:31:55 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:01.674 21:31:55 -- host/auth.sh@44 -- # keyid=1 00:31:01.674 21:31:55 -- host/auth.sh@45 -- # key=DHHC-1:00:MjAzOTFjZDkyZTRkYzFjMWMyODkyODJlZjhkZTRjZDhkYzI0Mzk2YmVhNTNmOTg4snUxrw==: 00:31:01.674 21:31:55 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:31:01.674 21:31:55 -- host/auth.sh@48 -- # echo ffdhe2048 00:31:01.674 21:31:55 -- host/auth.sh@49 -- # echo DHHC-1:00:MjAzOTFjZDkyZTRkYzFjMWMyODkyODJlZjhkZTRjZDhkYzI0Mzk2YmVhNTNmOTg4snUxrw==: 00:31:01.674 21:31:55 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 1 00:31:01.674 21:31:55 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:01.674 21:31:55 -- host/auth.sh@68 -- # digest=sha384 00:31:01.674 21:31:55 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:31:01.674 21:31:55 -- host/auth.sh@68 -- # keyid=1 00:31:01.674 21:31:55 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:01.674 21:31:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:01.674 21:31:55 -- common/autotest_common.sh@10 -- # set +x 00:31:01.674 21:31:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:01.674 21:31:55 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:01.674 21:31:55 -- nvmf/common.sh@717 -- # local ip 00:31:01.674 21:31:55 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:01.674 21:31:55 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:01.674 21:31:55 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:01.674 21:31:55 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:01.674 21:31:55 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:01.674 21:31:55 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:01.674 21:31:55 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:01.674 21:31:55 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:01.674 21:31:55 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:01.674 21:31:55 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:31:01.674 21:31:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:01.674 21:31:55 -- common/autotest_common.sh@10 -- # set +x 00:31:01.933 nvme0n1 00:31:01.933 21:31:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:01.933 21:31:55 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:01.933 21:31:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:01.933 21:31:55 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:01.933 21:31:55 -- 
common/autotest_common.sh@10 -- # set +x 00:31:01.933 21:31:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:01.933 21:31:56 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:01.933 21:31:56 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:01.933 21:31:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:01.933 21:31:56 -- common/autotest_common.sh@10 -- # set +x 00:31:01.933 21:31:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:01.933 21:31:56 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:01.933 21:31:56 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:31:01.933 21:31:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:01.933 21:31:56 -- host/auth.sh@44 -- # digest=sha384 00:31:01.933 21:31:56 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:01.933 21:31:56 -- host/auth.sh@44 -- # keyid=2 00:31:01.933 21:31:56 -- host/auth.sh@45 -- # key=DHHC-1:01:ZGM5NmI1YTk2ZWE5YzdmYzc3NmRlYzY0OTZkNjczMTnDSmIv: 00:31:01.933 21:31:56 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:31:01.933 21:31:56 -- host/auth.sh@48 -- # echo ffdhe2048 00:31:01.933 21:31:56 -- host/auth.sh@49 -- # echo DHHC-1:01:ZGM5NmI1YTk2ZWE5YzdmYzc3NmRlYzY0OTZkNjczMTnDSmIv: 00:31:01.933 21:31:56 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 2 00:31:01.933 21:31:56 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:01.933 21:31:56 -- host/auth.sh@68 -- # digest=sha384 00:31:01.933 21:31:56 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:31:01.933 21:31:56 -- host/auth.sh@68 -- # keyid=2 00:31:01.933 21:31:56 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:01.933 21:31:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:01.933 21:31:56 -- common/autotest_common.sh@10 -- # set +x 00:31:01.933 21:31:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:01.933 21:31:56 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:01.933 21:31:56 -- nvmf/common.sh@717 -- # local ip 00:31:01.933 21:31:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:01.933 21:31:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:01.933 21:31:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:01.933 21:31:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:01.933 21:31:56 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:01.933 21:31:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:01.933 21:31:56 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:01.933 21:31:56 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:01.933 21:31:56 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:01.933 21:31:56 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:31:01.933 21:31:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:01.933 21:31:56 -- common/autotest_common.sh@10 -- # set +x 00:31:01.933 nvme0n1 00:31:01.933 21:31:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:01.933 21:31:56 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:01.933 21:31:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:01.933 21:31:56 -- common/autotest_common.sh@10 -- # set +x 00:31:01.933 21:31:56 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:01.933 21:31:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:01.933 21:31:56 -- host/auth.sh@73 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:31:01.933 21:31:56 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:01.933 21:31:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:01.933 21:31:56 -- common/autotest_common.sh@10 -- # set +x 00:31:02.192 21:31:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:02.192 21:31:56 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:02.192 21:31:56 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:31:02.192 21:31:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:02.192 21:31:56 -- host/auth.sh@44 -- # digest=sha384 00:31:02.192 21:31:56 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:02.192 21:31:56 -- host/auth.sh@44 -- # keyid=3 00:31:02.192 21:31:56 -- host/auth.sh@45 -- # key=DHHC-1:02:MjE2ZTQ4NGQ2NDJhZDYwNThlMTVmMGZlYTE3NTEzNmIyODVjMTA4NTdlYzdlODhiW7UdpQ==: 00:31:02.192 21:31:56 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:31:02.192 21:31:56 -- host/auth.sh@48 -- # echo ffdhe2048 00:31:02.192 21:31:56 -- host/auth.sh@49 -- # echo DHHC-1:02:MjE2ZTQ4NGQ2NDJhZDYwNThlMTVmMGZlYTE3NTEzNmIyODVjMTA4NTdlYzdlODhiW7UdpQ==: 00:31:02.192 21:31:56 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 3 00:31:02.192 21:31:56 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:02.192 21:31:56 -- host/auth.sh@68 -- # digest=sha384 00:31:02.192 21:31:56 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:31:02.192 21:31:56 -- host/auth.sh@68 -- # keyid=3 00:31:02.192 21:31:56 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:02.192 21:31:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:02.192 21:31:56 -- common/autotest_common.sh@10 -- # set +x 00:31:02.192 21:31:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:02.192 21:31:56 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:02.192 21:31:56 -- nvmf/common.sh@717 -- # local ip 00:31:02.192 21:31:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:02.192 21:31:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:02.192 21:31:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:02.192 21:31:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:02.192 21:31:56 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:02.192 21:31:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:02.192 21:31:56 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:02.192 21:31:56 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:02.192 21:31:56 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:02.192 21:31:56 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:31:02.192 21:31:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:02.192 21:31:56 -- common/autotest_common.sh@10 -- # set +x 00:31:02.192 nvme0n1 00:31:02.192 21:31:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:02.192 21:31:56 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:02.192 21:31:56 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:02.192 21:31:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:02.192 21:31:56 -- common/autotest_common.sh@10 -- # set +x 00:31:02.192 21:31:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:02.192 21:31:56 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:02.192 21:31:56 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:02.192 
21:31:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:02.192 21:31:56 -- common/autotest_common.sh@10 -- # set +x 00:31:02.192 21:31:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:02.192 21:31:56 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:02.192 21:31:56 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:31:02.192 21:31:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:02.192 21:31:56 -- host/auth.sh@44 -- # digest=sha384 00:31:02.192 21:31:56 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:02.192 21:31:56 -- host/auth.sh@44 -- # keyid=4 00:31:02.192 21:31:56 -- host/auth.sh@45 -- # key=DHHC-1:03:MTdiYmJhYjQ1ZjEyMmNhNmE4MjZmOTZhYWJkZjkzZTNmMGU1NTc1Zjc4NzRjMDY1Yjc3MDIzYzY2MDcwODlkZE1k1io=: 00:31:02.192 21:31:56 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:31:02.192 21:31:56 -- host/auth.sh@48 -- # echo ffdhe2048 00:31:02.192 21:31:56 -- host/auth.sh@49 -- # echo DHHC-1:03:MTdiYmJhYjQ1ZjEyMmNhNmE4MjZmOTZhYWJkZjkzZTNmMGU1NTc1Zjc4NzRjMDY1Yjc3MDIzYzY2MDcwODlkZE1k1io=: 00:31:02.192 21:31:56 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 4 00:31:02.192 21:31:56 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:02.192 21:31:56 -- host/auth.sh@68 -- # digest=sha384 00:31:02.192 21:31:56 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:31:02.192 21:31:56 -- host/auth.sh@68 -- # keyid=4 00:31:02.192 21:31:56 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:02.192 21:31:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:02.192 21:31:56 -- common/autotest_common.sh@10 -- # set +x 00:31:02.192 21:31:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:02.192 21:31:56 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:02.192 21:31:56 -- nvmf/common.sh@717 -- # local ip 00:31:02.192 21:31:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:02.192 21:31:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:02.192 21:31:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:02.192 21:31:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:02.192 21:31:56 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:02.192 21:31:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:02.192 21:31:56 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:02.192 21:31:56 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:02.192 21:31:56 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:02.192 21:31:56 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:02.192 21:31:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:02.192 21:31:56 -- common/autotest_common.sh@10 -- # set +x 00:31:02.451 nvme0n1 00:31:02.451 21:31:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:02.451 21:31:56 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:02.451 21:31:56 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:02.451 21:31:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:02.451 21:31:56 -- common/autotest_common.sh@10 -- # set +x 00:31:02.451 21:31:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:02.451 21:31:56 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:02.451 21:31:56 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:02.451 21:31:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:02.451 
21:31:56 -- common/autotest_common.sh@10 -- # set +x 00:31:02.451 21:31:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:02.451 21:31:56 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:31:02.451 21:31:56 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:02.451 21:31:56 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:31:02.451 21:31:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:02.451 21:31:56 -- host/auth.sh@44 -- # digest=sha384 00:31:02.451 21:31:56 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:02.451 21:31:56 -- host/auth.sh@44 -- # keyid=0 00:31:02.451 21:31:56 -- host/auth.sh@45 -- # key=DHHC-1:00:MmMwYmQ2ODQ5NGM0ZWVmNjYxNThhMTIyODc5MGM2ZWHGiirJ: 00:31:02.451 21:31:56 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:31:02.451 21:31:56 -- host/auth.sh@48 -- # echo ffdhe3072 00:31:02.451 21:31:56 -- host/auth.sh@49 -- # echo DHHC-1:00:MmMwYmQ2ODQ5NGM0ZWVmNjYxNThhMTIyODc5MGM2ZWHGiirJ: 00:31:02.451 21:31:56 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 0 00:31:02.451 21:31:56 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:02.451 21:31:56 -- host/auth.sh@68 -- # digest=sha384 00:31:02.451 21:31:56 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:31:02.451 21:31:56 -- host/auth.sh@68 -- # keyid=0 00:31:02.451 21:31:56 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:02.451 21:31:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:02.451 21:31:56 -- common/autotest_common.sh@10 -- # set +x 00:31:02.451 21:31:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:02.451 21:31:56 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:02.451 21:31:56 -- nvmf/common.sh@717 -- # local ip 00:31:02.451 21:31:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:02.451 21:31:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:02.451 21:31:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:02.451 21:31:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:02.451 21:31:56 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:02.451 21:31:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:02.451 21:31:56 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:02.451 21:31:56 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:02.451 21:31:56 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:02.451 21:31:56 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:31:02.451 21:31:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:02.451 21:31:56 -- common/autotest_common.sh@10 -- # set +x 00:31:02.451 nvme0n1 00:31:02.451 21:31:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:02.451 21:31:56 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:02.451 21:31:56 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:02.451 21:31:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:02.451 21:31:56 -- common/autotest_common.sh@10 -- # set +x 00:31:02.710 21:31:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:02.710 21:31:56 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:02.710 21:31:56 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:02.710 21:31:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:02.710 21:31:56 -- common/autotest_common.sh@10 -- # set +x 00:31:02.710 21:31:56 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:02.710 21:31:56 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:02.710 21:31:56 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:31:02.710 21:31:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:02.710 21:31:56 -- host/auth.sh@44 -- # digest=sha384 00:31:02.710 21:31:56 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:02.710 21:31:56 -- host/auth.sh@44 -- # keyid=1 00:31:02.710 21:31:56 -- host/auth.sh@45 -- # key=DHHC-1:00:MjAzOTFjZDkyZTRkYzFjMWMyODkyODJlZjhkZTRjZDhkYzI0Mzk2YmVhNTNmOTg4snUxrw==: 00:31:02.710 21:31:56 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:31:02.710 21:31:56 -- host/auth.sh@48 -- # echo ffdhe3072 00:31:02.710 21:31:56 -- host/auth.sh@49 -- # echo DHHC-1:00:MjAzOTFjZDkyZTRkYzFjMWMyODkyODJlZjhkZTRjZDhkYzI0Mzk2YmVhNTNmOTg4snUxrw==: 00:31:02.710 21:31:56 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 1 00:31:02.710 21:31:56 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:02.710 21:31:56 -- host/auth.sh@68 -- # digest=sha384 00:31:02.710 21:31:56 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:31:02.710 21:31:56 -- host/auth.sh@68 -- # keyid=1 00:31:02.710 21:31:56 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:02.710 21:31:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:02.710 21:31:56 -- common/autotest_common.sh@10 -- # set +x 00:31:02.710 21:31:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:02.710 21:31:56 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:02.710 21:31:56 -- nvmf/common.sh@717 -- # local ip 00:31:02.710 21:31:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:02.710 21:31:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:02.710 21:31:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:02.710 21:31:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:02.710 21:31:56 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:02.710 21:31:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:02.710 21:31:56 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:02.710 21:31:56 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:02.710 21:31:56 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:02.710 21:31:56 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:31:02.710 21:31:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:02.710 21:31:56 -- common/autotest_common.sh@10 -- # set +x 00:31:02.710 nvme0n1 00:31:02.710 21:31:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:02.710 21:31:56 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:02.710 21:31:56 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:02.710 21:31:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:02.710 21:31:56 -- common/autotest_common.sh@10 -- # set +x 00:31:02.710 21:31:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:02.710 21:31:56 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:02.710 21:31:56 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:02.710 21:31:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:02.710 21:31:56 -- common/autotest_common.sh@10 -- # set +x 00:31:02.710 21:31:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:02.710 21:31:56 -- host/auth.sh@109 -- # for keyid in 
"${!keys[@]}" 00:31:02.710 21:31:56 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:31:02.710 21:31:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:02.710 21:31:56 -- host/auth.sh@44 -- # digest=sha384 00:31:02.710 21:31:56 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:02.710 21:31:56 -- host/auth.sh@44 -- # keyid=2 00:31:02.710 21:31:56 -- host/auth.sh@45 -- # key=DHHC-1:01:ZGM5NmI1YTk2ZWE5YzdmYzc3NmRlYzY0OTZkNjczMTnDSmIv: 00:31:02.710 21:31:56 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:31:02.710 21:31:56 -- host/auth.sh@48 -- # echo ffdhe3072 00:31:02.710 21:31:56 -- host/auth.sh@49 -- # echo DHHC-1:01:ZGM5NmI1YTk2ZWE5YzdmYzc3NmRlYzY0OTZkNjczMTnDSmIv: 00:31:02.710 21:31:56 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 2 00:31:02.710 21:31:56 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:02.710 21:31:56 -- host/auth.sh@68 -- # digest=sha384 00:31:02.710 21:31:56 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:31:02.710 21:31:56 -- host/auth.sh@68 -- # keyid=2 00:31:02.710 21:31:56 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:02.710 21:31:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:02.710 21:31:56 -- common/autotest_common.sh@10 -- # set +x 00:31:02.710 21:31:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:02.710 21:31:56 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:02.710 21:31:56 -- nvmf/common.sh@717 -- # local ip 00:31:02.710 21:31:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:02.710 21:31:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:02.710 21:31:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:02.710 21:31:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:02.710 21:31:56 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:02.710 21:31:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:02.710 21:31:56 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:02.710 21:31:56 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:02.710 21:31:56 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:02.710 21:31:56 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:31:02.710 21:31:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:02.710 21:31:56 -- common/autotest_common.sh@10 -- # set +x 00:31:02.970 nvme0n1 00:31:02.970 21:31:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:02.970 21:31:57 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:02.970 21:31:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:02.970 21:31:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:02.970 21:31:57 -- common/autotest_common.sh@10 -- # set +x 00:31:02.970 21:31:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:02.970 21:31:57 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:02.970 21:31:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:02.970 21:31:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:02.970 21:31:57 -- common/autotest_common.sh@10 -- # set +x 00:31:02.970 21:31:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:02.970 21:31:57 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:02.970 21:31:57 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:31:02.970 21:31:57 -- host/auth.sh@42 -- # local digest dhgroup 
keyid key 00:31:02.970 21:31:57 -- host/auth.sh@44 -- # digest=sha384 00:31:02.970 21:31:57 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:02.970 21:31:57 -- host/auth.sh@44 -- # keyid=3 00:31:02.970 21:31:57 -- host/auth.sh@45 -- # key=DHHC-1:02:MjE2ZTQ4NGQ2NDJhZDYwNThlMTVmMGZlYTE3NTEzNmIyODVjMTA4NTdlYzdlODhiW7UdpQ==: 00:31:02.970 21:31:57 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:31:02.970 21:31:57 -- host/auth.sh@48 -- # echo ffdhe3072 00:31:02.970 21:31:57 -- host/auth.sh@49 -- # echo DHHC-1:02:MjE2ZTQ4NGQ2NDJhZDYwNThlMTVmMGZlYTE3NTEzNmIyODVjMTA4NTdlYzdlODhiW7UdpQ==: 00:31:02.970 21:31:57 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 3 00:31:02.970 21:31:57 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:02.970 21:31:57 -- host/auth.sh@68 -- # digest=sha384 00:31:02.970 21:31:57 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:31:02.970 21:31:57 -- host/auth.sh@68 -- # keyid=3 00:31:02.970 21:31:57 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:02.970 21:31:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:02.970 21:31:57 -- common/autotest_common.sh@10 -- # set +x 00:31:02.970 21:31:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:02.970 21:31:57 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:02.970 21:31:57 -- nvmf/common.sh@717 -- # local ip 00:31:02.970 21:31:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:02.970 21:31:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:02.970 21:31:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:02.970 21:31:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:02.970 21:31:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:02.970 21:31:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:02.970 21:31:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:02.970 21:31:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:02.970 21:31:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:02.970 21:31:57 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:31:02.970 21:31:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:02.970 21:31:57 -- common/autotest_common.sh@10 -- # set +x 00:31:03.231 nvme0n1 00:31:03.231 21:31:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:03.231 21:31:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:03.231 21:31:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:03.231 21:31:57 -- common/autotest_common.sh@10 -- # set +x 00:31:03.231 21:31:57 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:03.231 21:31:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:03.231 21:31:57 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:03.231 21:31:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:03.231 21:31:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:03.231 21:31:57 -- common/autotest_common.sh@10 -- # set +x 00:31:03.231 21:31:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:03.231 21:31:57 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:03.231 21:31:57 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:31:03.231 21:31:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:03.231 21:31:57 -- host/auth.sh@44 -- # digest=sha384 00:31:03.231 21:31:57 -- host/auth.sh@44 -- # 
dhgroup=ffdhe3072 00:31:03.231 21:31:57 -- host/auth.sh@44 -- # keyid=4 00:31:03.231 21:31:57 -- host/auth.sh@45 -- # key=DHHC-1:03:MTdiYmJhYjQ1ZjEyMmNhNmE4MjZmOTZhYWJkZjkzZTNmMGU1NTc1Zjc4NzRjMDY1Yjc3MDIzYzY2MDcwODlkZE1k1io=: 00:31:03.231 21:31:57 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:31:03.231 21:31:57 -- host/auth.sh@48 -- # echo ffdhe3072 00:31:03.231 21:31:57 -- host/auth.sh@49 -- # echo DHHC-1:03:MTdiYmJhYjQ1ZjEyMmNhNmE4MjZmOTZhYWJkZjkzZTNmMGU1NTc1Zjc4NzRjMDY1Yjc3MDIzYzY2MDcwODlkZE1k1io=: 00:31:03.231 21:31:57 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 4 00:31:03.231 21:31:57 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:03.231 21:31:57 -- host/auth.sh@68 -- # digest=sha384 00:31:03.231 21:31:57 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:31:03.231 21:31:57 -- host/auth.sh@68 -- # keyid=4 00:31:03.231 21:31:57 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:03.231 21:31:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:03.231 21:31:57 -- common/autotest_common.sh@10 -- # set +x 00:31:03.231 21:31:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:03.231 21:31:57 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:03.231 21:31:57 -- nvmf/common.sh@717 -- # local ip 00:31:03.231 21:31:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:03.231 21:31:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:03.231 21:31:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:03.231 21:31:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:03.231 21:31:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:03.231 21:31:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:03.231 21:31:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:03.231 21:31:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:03.231 21:31:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:03.231 21:31:57 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:03.231 21:31:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:03.231 21:31:57 -- common/autotest_common.sh@10 -- # set +x 00:31:03.231 nvme0n1 00:31:03.231 21:31:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:03.231 21:31:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:03.231 21:31:57 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:03.231 21:31:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:03.231 21:31:57 -- common/autotest_common.sh@10 -- # set +x 00:31:03.491 21:31:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:03.491 21:31:57 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:03.492 21:31:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:03.492 21:31:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:03.492 21:31:57 -- common/autotest_common.sh@10 -- # set +x 00:31:03.492 21:31:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:03.492 21:31:57 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:31:03.492 21:31:57 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:03.492 21:31:57 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:31:03.492 21:31:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:03.492 21:31:57 -- host/auth.sh@44 -- # digest=sha384 00:31:03.492 21:31:57 -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:31:03.492 21:31:57 -- host/auth.sh@44 -- # keyid=0 00:31:03.492 21:31:57 -- host/auth.sh@45 -- # key=DHHC-1:00:MmMwYmQ2ODQ5NGM0ZWVmNjYxNThhMTIyODc5MGM2ZWHGiirJ: 00:31:03.492 21:31:57 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:31:03.492 21:31:57 -- host/auth.sh@48 -- # echo ffdhe4096 00:31:03.492 21:31:57 -- host/auth.sh@49 -- # echo DHHC-1:00:MmMwYmQ2ODQ5NGM0ZWVmNjYxNThhMTIyODc5MGM2ZWHGiirJ: 00:31:03.492 21:31:57 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 0 00:31:03.492 21:31:57 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:03.492 21:31:57 -- host/auth.sh@68 -- # digest=sha384 00:31:03.492 21:31:57 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:31:03.492 21:31:57 -- host/auth.sh@68 -- # keyid=0 00:31:03.492 21:31:57 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:03.492 21:31:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:03.492 21:31:57 -- common/autotest_common.sh@10 -- # set +x 00:31:03.492 21:31:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:03.492 21:31:57 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:03.492 21:31:57 -- nvmf/common.sh@717 -- # local ip 00:31:03.492 21:31:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:03.492 21:31:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:03.492 21:31:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:03.492 21:31:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:03.492 21:31:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:03.492 21:31:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:03.492 21:31:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:03.492 21:31:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:03.492 21:31:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:03.492 21:31:57 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:31:03.492 21:31:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:03.492 21:31:57 -- common/autotest_common.sh@10 -- # set +x 00:31:03.492 nvme0n1 00:31:03.492 21:31:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:03.492 21:31:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:03.492 21:31:57 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:03.492 21:31:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:03.492 21:31:57 -- common/autotest_common.sh@10 -- # set +x 00:31:03.752 21:31:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:03.752 21:31:57 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:03.752 21:31:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:03.752 21:31:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:03.752 21:31:57 -- common/autotest_common.sh@10 -- # set +x 00:31:03.752 21:31:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:03.752 21:31:57 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:03.752 21:31:57 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:31:03.752 21:31:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:03.752 21:31:57 -- host/auth.sh@44 -- # digest=sha384 00:31:03.752 21:31:57 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:03.752 21:31:57 -- host/auth.sh@44 -- # keyid=1 00:31:03.752 21:31:57 -- host/auth.sh@45 -- # 
key=DHHC-1:00:MjAzOTFjZDkyZTRkYzFjMWMyODkyODJlZjhkZTRjZDhkYzI0Mzk2YmVhNTNmOTg4snUxrw==: 00:31:03.752 21:31:57 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:31:03.752 21:31:57 -- host/auth.sh@48 -- # echo ffdhe4096 00:31:03.752 21:31:57 -- host/auth.sh@49 -- # echo DHHC-1:00:MjAzOTFjZDkyZTRkYzFjMWMyODkyODJlZjhkZTRjZDhkYzI0Mzk2YmVhNTNmOTg4snUxrw==: 00:31:03.752 21:31:57 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 1 00:31:03.752 21:31:57 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:03.752 21:31:57 -- host/auth.sh@68 -- # digest=sha384 00:31:03.752 21:31:57 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:31:03.752 21:31:57 -- host/auth.sh@68 -- # keyid=1 00:31:03.752 21:31:57 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:03.752 21:31:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:03.752 21:31:57 -- common/autotest_common.sh@10 -- # set +x 00:31:03.752 21:31:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:03.752 21:31:57 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:03.752 21:31:57 -- nvmf/common.sh@717 -- # local ip 00:31:03.752 21:31:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:03.752 21:31:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:03.752 21:31:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:03.752 21:31:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:03.752 21:31:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:03.752 21:31:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:03.752 21:31:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:03.752 21:31:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:03.752 21:31:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:03.752 21:31:57 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:31:03.752 21:31:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:03.752 21:31:57 -- common/autotest_common.sh@10 -- # set +x 00:31:03.752 nvme0n1 00:31:03.752 21:31:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:03.752 21:31:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:03.752 21:31:58 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:03.752 21:31:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:03.752 21:31:58 -- common/autotest_common.sh@10 -- # set +x 00:31:04.011 21:31:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:04.011 21:31:58 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:04.011 21:31:58 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:04.011 21:31:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:04.011 21:31:58 -- common/autotest_common.sh@10 -- # set +x 00:31:04.011 21:31:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:04.011 21:31:58 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:04.011 21:31:58 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:31:04.011 21:31:58 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:04.011 21:31:58 -- host/auth.sh@44 -- # digest=sha384 00:31:04.011 21:31:58 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:04.011 21:31:58 -- host/auth.sh@44 -- # keyid=2 00:31:04.011 21:31:58 -- host/auth.sh@45 -- # key=DHHC-1:01:ZGM5NmI1YTk2ZWE5YzdmYzc3NmRlYzY0OTZkNjczMTnDSmIv: 00:31:04.011 21:31:58 -- host/auth.sh@47 -- # echo 
'hmac(sha384)' 00:31:04.011 21:31:58 -- host/auth.sh@48 -- # echo ffdhe4096 00:31:04.011 21:31:58 -- host/auth.sh@49 -- # echo DHHC-1:01:ZGM5NmI1YTk2ZWE5YzdmYzc3NmRlYzY0OTZkNjczMTnDSmIv: 00:31:04.011 21:31:58 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 2 00:31:04.011 21:31:58 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:04.011 21:31:58 -- host/auth.sh@68 -- # digest=sha384 00:31:04.011 21:31:58 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:31:04.011 21:31:58 -- host/auth.sh@68 -- # keyid=2 00:31:04.011 21:31:58 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:04.011 21:31:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:04.011 21:31:58 -- common/autotest_common.sh@10 -- # set +x 00:31:04.011 21:31:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:04.011 21:31:58 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:04.011 21:31:58 -- nvmf/common.sh@717 -- # local ip 00:31:04.011 21:31:58 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:04.011 21:31:58 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:04.011 21:31:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:04.011 21:31:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:04.011 21:31:58 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:04.011 21:31:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:04.011 21:31:58 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:04.011 21:31:58 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:04.011 21:31:58 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:04.011 21:31:58 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:31:04.011 21:31:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:04.011 21:31:58 -- common/autotest_common.sh@10 -- # set +x 00:31:04.011 nvme0n1 00:31:04.011 21:31:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:04.011 21:31:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:04.011 21:31:58 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:04.011 21:31:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:04.011 21:31:58 -- common/autotest_common.sh@10 -- # set +x 00:31:04.269 21:31:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:04.269 21:31:58 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:04.269 21:31:58 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:04.269 21:31:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:04.269 21:31:58 -- common/autotest_common.sh@10 -- # set +x 00:31:04.269 21:31:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:04.269 21:31:58 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:04.269 21:31:58 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:31:04.269 21:31:58 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:04.269 21:31:58 -- host/auth.sh@44 -- # digest=sha384 00:31:04.269 21:31:58 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:04.269 21:31:58 -- host/auth.sh@44 -- # keyid=3 00:31:04.269 21:31:58 -- host/auth.sh@45 -- # key=DHHC-1:02:MjE2ZTQ4NGQ2NDJhZDYwNThlMTVmMGZlYTE3NTEzNmIyODVjMTA4NTdlYzdlODhiW7UdpQ==: 00:31:04.269 21:31:58 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:31:04.269 21:31:58 -- host/auth.sh@48 -- # echo ffdhe4096 00:31:04.269 21:31:58 -- host/auth.sh@49 -- # echo 
DHHC-1:02:MjE2ZTQ4NGQ2NDJhZDYwNThlMTVmMGZlYTE3NTEzNmIyODVjMTA4NTdlYzdlODhiW7UdpQ==: 00:31:04.269 21:31:58 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 3 00:31:04.269 21:31:58 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:04.269 21:31:58 -- host/auth.sh@68 -- # digest=sha384 00:31:04.269 21:31:58 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:31:04.269 21:31:58 -- host/auth.sh@68 -- # keyid=3 00:31:04.269 21:31:58 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:04.269 21:31:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:04.269 21:31:58 -- common/autotest_common.sh@10 -- # set +x 00:31:04.269 21:31:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:04.269 21:31:58 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:04.269 21:31:58 -- nvmf/common.sh@717 -- # local ip 00:31:04.269 21:31:58 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:04.269 21:31:58 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:04.269 21:31:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:04.269 21:31:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:04.269 21:31:58 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:04.269 21:31:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:04.269 21:31:58 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:04.269 21:31:58 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:04.269 21:31:58 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:04.269 21:31:58 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:31:04.269 21:31:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:04.269 21:31:58 -- common/autotest_common.sh@10 -- # set +x 00:31:04.269 nvme0n1 00:31:04.270 21:31:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:04.528 21:31:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:04.528 21:31:58 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:04.528 21:31:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:04.528 21:31:58 -- common/autotest_common.sh@10 -- # set +x 00:31:04.528 21:31:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:04.528 21:31:58 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:04.528 21:31:58 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:04.528 21:31:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:04.528 21:31:58 -- common/autotest_common.sh@10 -- # set +x 00:31:04.528 21:31:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:04.528 21:31:58 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:04.528 21:31:58 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:31:04.528 21:31:58 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:04.528 21:31:58 -- host/auth.sh@44 -- # digest=sha384 00:31:04.528 21:31:58 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:04.528 21:31:58 -- host/auth.sh@44 -- # keyid=4 00:31:04.528 21:31:58 -- host/auth.sh@45 -- # key=DHHC-1:03:MTdiYmJhYjQ1ZjEyMmNhNmE4MjZmOTZhYWJkZjkzZTNmMGU1NTc1Zjc4NzRjMDY1Yjc3MDIzYzY2MDcwODlkZE1k1io=: 00:31:04.528 21:31:58 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:31:04.528 21:31:58 -- host/auth.sh@48 -- # echo ffdhe4096 00:31:04.528 21:31:58 -- host/auth.sh@49 -- # echo 
DHHC-1:03:MTdiYmJhYjQ1ZjEyMmNhNmE4MjZmOTZhYWJkZjkzZTNmMGU1NTc1Zjc4NzRjMDY1Yjc3MDIzYzY2MDcwODlkZE1k1io=: 00:31:04.528 21:31:58 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 4 00:31:04.528 21:31:58 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:04.528 21:31:58 -- host/auth.sh@68 -- # digest=sha384 00:31:04.528 21:31:58 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:31:04.528 21:31:58 -- host/auth.sh@68 -- # keyid=4 00:31:04.528 21:31:58 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:04.528 21:31:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:04.528 21:31:58 -- common/autotest_common.sh@10 -- # set +x 00:31:04.528 21:31:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:04.528 21:31:58 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:04.528 21:31:58 -- nvmf/common.sh@717 -- # local ip 00:31:04.528 21:31:58 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:04.528 21:31:58 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:04.528 21:31:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:04.528 21:31:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:04.528 21:31:58 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:04.528 21:31:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:04.528 21:31:58 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:04.528 21:31:58 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:04.528 21:31:58 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:04.528 21:31:58 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:04.528 21:31:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:04.528 21:31:58 -- common/autotest_common.sh@10 -- # set +x 00:31:04.528 nvme0n1 00:31:04.528 21:31:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:04.528 21:31:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:04.528 21:31:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:04.528 21:31:58 -- common/autotest_common.sh@10 -- # set +x 00:31:04.787 21:31:58 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:04.787 21:31:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:04.787 21:31:58 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:04.787 21:31:58 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:04.787 21:31:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:04.787 21:31:58 -- common/autotest_common.sh@10 -- # set +x 00:31:04.787 21:31:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:04.787 21:31:58 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:31:04.787 21:31:58 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:04.787 21:31:58 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:31:04.787 21:31:58 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:04.787 21:31:58 -- host/auth.sh@44 -- # digest=sha384 00:31:04.787 21:31:58 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:04.787 21:31:58 -- host/auth.sh@44 -- # keyid=0 00:31:04.787 21:31:58 -- host/auth.sh@45 -- # key=DHHC-1:00:MmMwYmQ2ODQ5NGM0ZWVmNjYxNThhMTIyODc5MGM2ZWHGiirJ: 00:31:04.787 21:31:58 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:31:04.787 21:31:58 -- host/auth.sh@48 -- # echo ffdhe6144 00:31:04.787 21:31:58 -- host/auth.sh@49 -- # echo 
DHHC-1:00:MmMwYmQ2ODQ5NGM0ZWVmNjYxNThhMTIyODc5MGM2ZWHGiirJ: 00:31:04.787 21:31:58 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 0 00:31:04.787 21:31:58 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:04.787 21:31:58 -- host/auth.sh@68 -- # digest=sha384 00:31:04.787 21:31:58 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:31:04.787 21:31:58 -- host/auth.sh@68 -- # keyid=0 00:31:04.787 21:31:58 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:04.787 21:31:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:04.787 21:31:58 -- common/autotest_common.sh@10 -- # set +x 00:31:04.787 21:31:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:04.787 21:31:58 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:04.787 21:31:58 -- nvmf/common.sh@717 -- # local ip 00:31:04.787 21:31:58 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:04.787 21:31:58 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:04.787 21:31:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:04.787 21:31:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:04.787 21:31:58 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:04.787 21:31:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:04.787 21:31:58 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:04.787 21:31:58 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:04.787 21:31:58 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:04.787 21:31:58 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:31:04.787 21:31:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:04.787 21:31:58 -- common/autotest_common.sh@10 -- # set +x 00:31:05.047 nvme0n1 00:31:05.047 21:31:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:05.047 21:31:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:05.047 21:31:59 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:05.047 21:31:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:05.047 21:31:59 -- common/autotest_common.sh@10 -- # set +x 00:31:05.047 21:31:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:05.047 21:31:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:05.047 21:31:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:05.047 21:31:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:05.047 21:31:59 -- common/autotest_common.sh@10 -- # set +x 00:31:05.047 21:31:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:05.047 21:31:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:05.047 21:31:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:31:05.047 21:31:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:05.047 21:31:59 -- host/auth.sh@44 -- # digest=sha384 00:31:05.047 21:31:59 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:05.047 21:31:59 -- host/auth.sh@44 -- # keyid=1 00:31:05.047 21:31:59 -- host/auth.sh@45 -- # key=DHHC-1:00:MjAzOTFjZDkyZTRkYzFjMWMyODkyODJlZjhkZTRjZDhkYzI0Mzk2YmVhNTNmOTg4snUxrw==: 00:31:05.047 21:31:59 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:31:05.047 21:31:59 -- host/auth.sh@48 -- # echo ffdhe6144 00:31:05.047 21:31:59 -- host/auth.sh@49 -- # echo DHHC-1:00:MjAzOTFjZDkyZTRkYzFjMWMyODkyODJlZjhkZTRjZDhkYzI0Mzk2YmVhNTNmOTg4snUxrw==: 00:31:05.047 21:31:59 -- host/auth.sh@111 -- # 
connect_authenticate sha384 ffdhe6144 1 00:31:05.047 21:31:59 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:05.047 21:31:59 -- host/auth.sh@68 -- # digest=sha384 00:31:05.047 21:31:59 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:31:05.047 21:31:59 -- host/auth.sh@68 -- # keyid=1 00:31:05.047 21:31:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:05.047 21:31:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:05.047 21:31:59 -- common/autotest_common.sh@10 -- # set +x 00:31:05.047 21:31:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:05.047 21:31:59 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:05.047 21:31:59 -- nvmf/common.sh@717 -- # local ip 00:31:05.047 21:31:59 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:05.047 21:31:59 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:05.047 21:31:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:05.047 21:31:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:05.047 21:31:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:05.047 21:31:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:05.047 21:31:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:05.047 21:31:59 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:05.047 21:31:59 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:05.047 21:31:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:31:05.047 21:31:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:05.047 21:31:59 -- common/autotest_common.sh@10 -- # set +x 00:31:05.617 nvme0n1 00:31:05.617 21:31:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:05.617 21:31:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:05.617 21:31:59 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:05.617 21:31:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:05.617 21:31:59 -- common/autotest_common.sh@10 -- # set +x 00:31:05.617 21:31:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:05.617 21:31:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:05.617 21:31:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:05.617 21:31:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:05.617 21:31:59 -- common/autotest_common.sh@10 -- # set +x 00:31:05.617 21:31:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:05.617 21:31:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:05.617 21:31:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:31:05.617 21:31:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:05.617 21:31:59 -- host/auth.sh@44 -- # digest=sha384 00:31:05.617 21:31:59 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:05.617 21:31:59 -- host/auth.sh@44 -- # keyid=2 00:31:05.617 21:31:59 -- host/auth.sh@45 -- # key=DHHC-1:01:ZGM5NmI1YTk2ZWE5YzdmYzc3NmRlYzY0OTZkNjczMTnDSmIv: 00:31:05.617 21:31:59 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:31:05.617 21:31:59 -- host/auth.sh@48 -- # echo ffdhe6144 00:31:05.617 21:31:59 -- host/auth.sh@49 -- # echo DHHC-1:01:ZGM5NmI1YTk2ZWE5YzdmYzc3NmRlYzY0OTZkNjczMTnDSmIv: 00:31:05.617 21:31:59 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 2 00:31:05.617 21:31:59 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:05.617 21:31:59 -- host/auth.sh@68 -- # 
digest=sha384 00:31:05.617 21:31:59 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:31:05.617 21:31:59 -- host/auth.sh@68 -- # keyid=2 00:31:05.617 21:31:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:05.617 21:31:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:05.617 21:31:59 -- common/autotest_common.sh@10 -- # set +x 00:31:05.617 21:31:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:05.617 21:31:59 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:05.617 21:31:59 -- nvmf/common.sh@717 -- # local ip 00:31:05.617 21:31:59 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:05.617 21:31:59 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:05.617 21:31:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:05.617 21:31:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:05.617 21:31:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:05.617 21:31:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:05.617 21:31:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:05.617 21:31:59 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:05.617 21:31:59 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:05.617 21:31:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:31:05.617 21:31:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:05.617 21:31:59 -- common/autotest_common.sh@10 -- # set +x 00:31:05.877 nvme0n1 00:31:05.877 21:32:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:05.877 21:32:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:05.877 21:32:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:05.877 21:32:00 -- common/autotest_common.sh@10 -- # set +x 00:31:05.877 21:32:00 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:05.877 21:32:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:05.877 21:32:00 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:05.877 21:32:00 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:05.877 21:32:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:05.877 21:32:00 -- common/autotest_common.sh@10 -- # set +x 00:31:05.877 21:32:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:05.877 21:32:00 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:05.877 21:32:00 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:31:05.877 21:32:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:05.878 21:32:00 -- host/auth.sh@44 -- # digest=sha384 00:31:05.878 21:32:00 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:05.878 21:32:00 -- host/auth.sh@44 -- # keyid=3 00:31:05.878 21:32:00 -- host/auth.sh@45 -- # key=DHHC-1:02:MjE2ZTQ4NGQ2NDJhZDYwNThlMTVmMGZlYTE3NTEzNmIyODVjMTA4NTdlYzdlODhiW7UdpQ==: 00:31:05.878 21:32:00 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:31:05.878 21:32:00 -- host/auth.sh@48 -- # echo ffdhe6144 00:31:05.878 21:32:00 -- host/auth.sh@49 -- # echo DHHC-1:02:MjE2ZTQ4NGQ2NDJhZDYwNThlMTVmMGZlYTE3NTEzNmIyODVjMTA4NTdlYzdlODhiW7UdpQ==: 00:31:05.878 21:32:00 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 3 00:31:05.878 21:32:00 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:05.878 21:32:00 -- host/auth.sh@68 -- # digest=sha384 00:31:05.878 21:32:00 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:31:05.878 21:32:00 -- host/auth.sh@68 
-- # keyid=3 00:31:05.878 21:32:00 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:05.878 21:32:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:05.878 21:32:00 -- common/autotest_common.sh@10 -- # set +x 00:31:05.878 21:32:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:05.878 21:32:00 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:05.878 21:32:00 -- nvmf/common.sh@717 -- # local ip 00:31:05.878 21:32:00 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:05.878 21:32:00 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:05.878 21:32:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:05.878 21:32:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:05.878 21:32:00 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:05.878 21:32:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:05.878 21:32:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:05.878 21:32:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:05.878 21:32:00 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:05.878 21:32:00 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:31:05.878 21:32:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:05.878 21:32:00 -- common/autotest_common.sh@10 -- # set +x 00:31:06.137 nvme0n1 00:31:06.137 21:32:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:06.137 21:32:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:06.137 21:32:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:06.137 21:32:00 -- common/autotest_common.sh@10 -- # set +x 00:31:06.395 21:32:00 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:06.395 21:32:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:06.395 21:32:00 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:06.395 21:32:00 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:06.395 21:32:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:06.395 21:32:00 -- common/autotest_common.sh@10 -- # set +x 00:31:06.395 21:32:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:06.395 21:32:00 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:06.395 21:32:00 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:31:06.395 21:32:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:06.395 21:32:00 -- host/auth.sh@44 -- # digest=sha384 00:31:06.395 21:32:00 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:06.395 21:32:00 -- host/auth.sh@44 -- # keyid=4 00:31:06.395 21:32:00 -- host/auth.sh@45 -- # key=DHHC-1:03:MTdiYmJhYjQ1ZjEyMmNhNmE4MjZmOTZhYWJkZjkzZTNmMGU1NTc1Zjc4NzRjMDY1Yjc3MDIzYzY2MDcwODlkZE1k1io=: 00:31:06.395 21:32:00 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:31:06.395 21:32:00 -- host/auth.sh@48 -- # echo ffdhe6144 00:31:06.395 21:32:00 -- host/auth.sh@49 -- # echo DHHC-1:03:MTdiYmJhYjQ1ZjEyMmNhNmE4MjZmOTZhYWJkZjkzZTNmMGU1NTc1Zjc4NzRjMDY1Yjc3MDIzYzY2MDcwODlkZE1k1io=: 00:31:06.395 21:32:00 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 4 00:31:06.395 21:32:00 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:06.395 21:32:00 -- host/auth.sh@68 -- # digest=sha384 00:31:06.395 21:32:00 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:31:06.395 21:32:00 -- host/auth.sh@68 -- # keyid=4 00:31:06.395 21:32:00 -- host/auth.sh@69 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:06.395 21:32:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:06.395 21:32:00 -- common/autotest_common.sh@10 -- # set +x 00:31:06.395 21:32:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:06.395 21:32:00 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:06.395 21:32:00 -- nvmf/common.sh@717 -- # local ip 00:31:06.395 21:32:00 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:06.395 21:32:00 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:06.395 21:32:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:06.395 21:32:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:06.395 21:32:00 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:06.395 21:32:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:06.395 21:32:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:06.395 21:32:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:06.395 21:32:00 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:06.395 21:32:00 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:06.395 21:32:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:06.395 21:32:00 -- common/autotest_common.sh@10 -- # set +x 00:31:06.685 nvme0n1 00:31:06.685 21:32:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:06.685 21:32:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:06.685 21:32:00 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:06.685 21:32:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:06.685 21:32:00 -- common/autotest_common.sh@10 -- # set +x 00:31:06.685 21:32:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:06.685 21:32:00 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:06.685 21:32:00 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:06.685 21:32:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:06.685 21:32:00 -- common/autotest_common.sh@10 -- # set +x 00:31:06.685 21:32:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:06.685 21:32:00 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:31:06.685 21:32:00 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:06.685 21:32:00 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:31:06.685 21:32:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:06.685 21:32:00 -- host/auth.sh@44 -- # digest=sha384 00:31:06.685 21:32:00 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:06.685 21:32:00 -- host/auth.sh@44 -- # keyid=0 00:31:06.685 21:32:00 -- host/auth.sh@45 -- # key=DHHC-1:00:MmMwYmQ2ODQ5NGM0ZWVmNjYxNThhMTIyODc5MGM2ZWHGiirJ: 00:31:06.685 21:32:00 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:31:06.685 21:32:00 -- host/auth.sh@48 -- # echo ffdhe8192 00:31:06.685 21:32:00 -- host/auth.sh@49 -- # echo DHHC-1:00:MmMwYmQ2ODQ5NGM0ZWVmNjYxNThhMTIyODc5MGM2ZWHGiirJ: 00:31:06.685 21:32:00 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 0 00:31:06.685 21:32:00 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:06.685 21:32:00 -- host/auth.sh@68 -- # digest=sha384 00:31:06.685 21:32:00 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:31:06.685 21:32:00 -- host/auth.sh@68 -- # keyid=0 00:31:06.685 21:32:00 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:06.685 
21:32:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:06.685 21:32:00 -- common/autotest_common.sh@10 -- # set +x 00:31:06.685 21:32:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:06.685 21:32:00 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:06.685 21:32:00 -- nvmf/common.sh@717 -- # local ip 00:31:06.685 21:32:00 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:06.685 21:32:00 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:06.685 21:32:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:06.685 21:32:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:06.685 21:32:00 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:06.685 21:32:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:06.685 21:32:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:06.685 21:32:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:06.685 21:32:00 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:06.685 21:32:00 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:31:06.685 21:32:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:06.685 21:32:00 -- common/autotest_common.sh@10 -- # set +x 00:31:07.356 nvme0n1 00:31:07.356 21:32:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:07.356 21:32:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:07.356 21:32:01 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:07.356 21:32:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:07.356 21:32:01 -- common/autotest_common.sh@10 -- # set +x 00:31:07.356 21:32:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:07.356 21:32:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:07.356 21:32:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:07.356 21:32:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:07.356 21:32:01 -- common/autotest_common.sh@10 -- # set +x 00:31:07.356 21:32:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:07.356 21:32:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:07.356 21:32:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:31:07.356 21:32:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:07.356 21:32:01 -- host/auth.sh@44 -- # digest=sha384 00:31:07.356 21:32:01 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:07.356 21:32:01 -- host/auth.sh@44 -- # keyid=1 00:31:07.356 21:32:01 -- host/auth.sh@45 -- # key=DHHC-1:00:MjAzOTFjZDkyZTRkYzFjMWMyODkyODJlZjhkZTRjZDhkYzI0Mzk2YmVhNTNmOTg4snUxrw==: 00:31:07.356 21:32:01 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:31:07.356 21:32:01 -- host/auth.sh@48 -- # echo ffdhe8192 00:31:07.356 21:32:01 -- host/auth.sh@49 -- # echo DHHC-1:00:MjAzOTFjZDkyZTRkYzFjMWMyODkyODJlZjhkZTRjZDhkYzI0Mzk2YmVhNTNmOTg4snUxrw==: 00:31:07.356 21:32:01 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 1 00:31:07.356 21:32:01 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:07.356 21:32:01 -- host/auth.sh@68 -- # digest=sha384 00:31:07.356 21:32:01 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:31:07.356 21:32:01 -- host/auth.sh@68 -- # keyid=1 00:31:07.356 21:32:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:07.356 21:32:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:07.356 21:32:01 -- common/autotest_common.sh@10 -- # 
set +x 00:31:07.356 21:32:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:07.356 21:32:01 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:07.356 21:32:01 -- nvmf/common.sh@717 -- # local ip 00:31:07.356 21:32:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:07.356 21:32:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:07.356 21:32:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:07.356 21:32:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:07.356 21:32:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:07.356 21:32:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:07.356 21:32:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:07.356 21:32:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:07.356 21:32:01 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:07.356 21:32:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:31:07.356 21:32:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:07.356 21:32:01 -- common/autotest_common.sh@10 -- # set +x 00:31:07.928 nvme0n1 00:31:07.928 21:32:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:07.928 21:32:02 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:07.928 21:32:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:07.928 21:32:02 -- common/autotest_common.sh@10 -- # set +x 00:31:07.928 21:32:02 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:07.928 21:32:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:07.928 21:32:02 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:07.928 21:32:02 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:07.928 21:32:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:07.928 21:32:02 -- common/autotest_common.sh@10 -- # set +x 00:31:07.928 21:32:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:07.928 21:32:02 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:07.928 21:32:02 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:31:07.928 21:32:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:07.928 21:32:02 -- host/auth.sh@44 -- # digest=sha384 00:31:07.928 21:32:02 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:07.928 21:32:02 -- host/auth.sh@44 -- # keyid=2 00:31:07.928 21:32:02 -- host/auth.sh@45 -- # key=DHHC-1:01:ZGM5NmI1YTk2ZWE5YzdmYzc3NmRlYzY0OTZkNjczMTnDSmIv: 00:31:07.928 21:32:02 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:31:07.928 21:32:02 -- host/auth.sh@48 -- # echo ffdhe8192 00:31:07.928 21:32:02 -- host/auth.sh@49 -- # echo DHHC-1:01:ZGM5NmI1YTk2ZWE5YzdmYzc3NmRlYzY0OTZkNjczMTnDSmIv: 00:31:07.928 21:32:02 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 2 00:31:07.928 21:32:02 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:07.928 21:32:02 -- host/auth.sh@68 -- # digest=sha384 00:31:07.928 21:32:02 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:31:07.928 21:32:02 -- host/auth.sh@68 -- # keyid=2 00:31:07.928 21:32:02 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:07.928 21:32:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:07.928 21:32:02 -- common/autotest_common.sh@10 -- # set +x 00:31:07.928 21:32:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:07.928 21:32:02 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:07.928 21:32:02 -- 
nvmf/common.sh@717 -- # local ip 00:31:07.928 21:32:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:07.928 21:32:02 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:07.928 21:32:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:07.928 21:32:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:07.928 21:32:02 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:07.928 21:32:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:07.928 21:32:02 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:07.928 21:32:02 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:07.928 21:32:02 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:07.928 21:32:02 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:31:07.928 21:32:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:07.928 21:32:02 -- common/autotest_common.sh@10 -- # set +x 00:31:08.496 nvme0n1 00:31:08.496 21:32:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:08.496 21:32:02 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:08.496 21:32:02 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:08.496 21:32:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:08.496 21:32:02 -- common/autotest_common.sh@10 -- # set +x 00:31:08.496 21:32:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:08.496 21:32:02 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:08.496 21:32:02 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:08.496 21:32:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:08.496 21:32:02 -- common/autotest_common.sh@10 -- # set +x 00:31:08.496 21:32:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:08.496 21:32:02 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:08.496 21:32:02 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:31:08.496 21:32:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:08.496 21:32:02 -- host/auth.sh@44 -- # digest=sha384 00:31:08.496 21:32:02 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:08.496 21:32:02 -- host/auth.sh@44 -- # keyid=3 00:31:08.496 21:32:02 -- host/auth.sh@45 -- # key=DHHC-1:02:MjE2ZTQ4NGQ2NDJhZDYwNThlMTVmMGZlYTE3NTEzNmIyODVjMTA4NTdlYzdlODhiW7UdpQ==: 00:31:08.496 21:32:02 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:31:08.496 21:32:02 -- host/auth.sh@48 -- # echo ffdhe8192 00:31:08.496 21:32:02 -- host/auth.sh@49 -- # echo DHHC-1:02:MjE2ZTQ4NGQ2NDJhZDYwNThlMTVmMGZlYTE3NTEzNmIyODVjMTA4NTdlYzdlODhiW7UdpQ==: 00:31:08.496 21:32:02 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 3 00:31:08.496 21:32:02 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:08.496 21:32:02 -- host/auth.sh@68 -- # digest=sha384 00:31:08.496 21:32:02 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:31:08.496 21:32:02 -- host/auth.sh@68 -- # keyid=3 00:31:08.496 21:32:02 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:08.496 21:32:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:08.496 21:32:02 -- common/autotest_common.sh@10 -- # set +x 00:31:08.496 21:32:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:08.496 21:32:02 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:08.496 21:32:02 -- nvmf/common.sh@717 -- # local ip 00:31:08.496 21:32:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:08.496 21:32:02 
-- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:08.496 21:32:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:08.496 21:32:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:08.496 21:32:02 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:08.496 21:32:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:08.496 21:32:02 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:08.496 21:32:02 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:08.496 21:32:02 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:08.496 21:32:02 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:31:08.496 21:32:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:08.496 21:32:02 -- common/autotest_common.sh@10 -- # set +x 00:31:09.439 nvme0n1 00:31:09.439 21:32:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:09.439 21:32:03 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:09.439 21:32:03 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:09.439 21:32:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:09.439 21:32:03 -- common/autotest_common.sh@10 -- # set +x 00:31:09.439 21:32:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:09.439 21:32:03 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:09.439 21:32:03 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:09.439 21:32:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:09.439 21:32:03 -- common/autotest_common.sh@10 -- # set +x 00:31:09.439 21:32:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:09.439 21:32:03 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:09.439 21:32:03 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:31:09.439 21:32:03 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:09.439 21:32:03 -- host/auth.sh@44 -- # digest=sha384 00:31:09.439 21:32:03 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:09.439 21:32:03 -- host/auth.sh@44 -- # keyid=4 00:31:09.439 21:32:03 -- host/auth.sh@45 -- # key=DHHC-1:03:MTdiYmJhYjQ1ZjEyMmNhNmE4MjZmOTZhYWJkZjkzZTNmMGU1NTc1Zjc4NzRjMDY1Yjc3MDIzYzY2MDcwODlkZE1k1io=: 00:31:09.439 21:32:03 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:31:09.439 21:32:03 -- host/auth.sh@48 -- # echo ffdhe8192 00:31:09.439 21:32:03 -- host/auth.sh@49 -- # echo DHHC-1:03:MTdiYmJhYjQ1ZjEyMmNhNmE4MjZmOTZhYWJkZjkzZTNmMGU1NTc1Zjc4NzRjMDY1Yjc3MDIzYzY2MDcwODlkZE1k1io=: 00:31:09.439 21:32:03 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 4 00:31:09.439 21:32:03 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:09.439 21:32:03 -- host/auth.sh@68 -- # digest=sha384 00:31:09.439 21:32:03 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:31:09.439 21:32:03 -- host/auth.sh@68 -- # keyid=4 00:31:09.439 21:32:03 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:09.439 21:32:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:09.439 21:32:03 -- common/autotest_common.sh@10 -- # set +x 00:31:09.439 21:32:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:09.439 21:32:03 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:09.439 21:32:03 -- nvmf/common.sh@717 -- # local ip 00:31:09.439 21:32:03 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:09.439 21:32:03 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:09.439 21:32:03 -- 
nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:09.439 21:32:03 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:09.439 21:32:03 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:09.439 21:32:03 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:09.439 21:32:03 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:09.439 21:32:03 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:09.439 21:32:03 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:09.439 21:32:03 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:09.439 21:32:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:09.439 21:32:03 -- common/autotest_common.sh@10 -- # set +x 00:31:10.009 nvme0n1 00:31:10.009 21:32:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:10.009 21:32:03 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:10.009 21:32:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:10.009 21:32:03 -- common/autotest_common.sh@10 -- # set +x 00:31:10.009 21:32:03 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:10.009 21:32:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:10.009 21:32:04 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:10.009 21:32:04 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:10.009 21:32:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:10.009 21:32:04 -- common/autotest_common.sh@10 -- # set +x 00:31:10.009 21:32:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:10.009 21:32:04 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:31:10.009 21:32:04 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:31:10.009 21:32:04 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:10.009 21:32:04 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:31:10.009 21:32:04 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:10.009 21:32:04 -- host/auth.sh@44 -- # digest=sha512 00:31:10.009 21:32:04 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:10.009 21:32:04 -- host/auth.sh@44 -- # keyid=0 00:31:10.009 21:32:04 -- host/auth.sh@45 -- # key=DHHC-1:00:MmMwYmQ2ODQ5NGM0ZWVmNjYxNThhMTIyODc5MGM2ZWHGiirJ: 00:31:10.009 21:32:04 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:31:10.009 21:32:04 -- host/auth.sh@48 -- # echo ffdhe2048 00:31:10.009 21:32:04 -- host/auth.sh@49 -- # echo DHHC-1:00:MmMwYmQ2ODQ5NGM0ZWVmNjYxNThhMTIyODc5MGM2ZWHGiirJ: 00:31:10.009 21:32:04 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 0 00:31:10.009 21:32:04 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:10.009 21:32:04 -- host/auth.sh@68 -- # digest=sha512 00:31:10.009 21:32:04 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:31:10.009 21:32:04 -- host/auth.sh@68 -- # keyid=0 00:31:10.009 21:32:04 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:10.009 21:32:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:10.009 21:32:04 -- common/autotest_common.sh@10 -- # set +x 00:31:10.009 21:32:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:10.009 21:32:04 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:10.009 21:32:04 -- nvmf/common.sh@717 -- # local ip 00:31:10.009 21:32:04 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:10.009 21:32:04 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:10.009 21:32:04 -- 
nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:10.009 21:32:04 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:10.009 21:32:04 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:10.009 21:32:04 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:10.009 21:32:04 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:10.009 21:32:04 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:10.009 21:32:04 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:10.009 21:32:04 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:31:10.009 21:32:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:10.009 21:32:04 -- common/autotest_common.sh@10 -- # set +x 00:31:10.009 nvme0n1 00:31:10.009 21:32:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:10.009 21:32:04 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:10.009 21:32:04 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:10.009 21:32:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:10.009 21:32:04 -- common/autotest_common.sh@10 -- # set +x 00:31:10.009 21:32:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:10.009 21:32:04 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:10.009 21:32:04 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:10.009 21:32:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:10.009 21:32:04 -- common/autotest_common.sh@10 -- # set +x 00:31:10.009 21:32:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:10.009 21:32:04 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:10.009 21:32:04 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:31:10.009 21:32:04 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:10.009 21:32:04 -- host/auth.sh@44 -- # digest=sha512 00:31:10.009 21:32:04 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:10.009 21:32:04 -- host/auth.sh@44 -- # keyid=1 00:31:10.009 21:32:04 -- host/auth.sh@45 -- # key=DHHC-1:00:MjAzOTFjZDkyZTRkYzFjMWMyODkyODJlZjhkZTRjZDhkYzI0Mzk2YmVhNTNmOTg4snUxrw==: 00:31:10.009 21:32:04 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:31:10.009 21:32:04 -- host/auth.sh@48 -- # echo ffdhe2048 00:31:10.009 21:32:04 -- host/auth.sh@49 -- # echo DHHC-1:00:MjAzOTFjZDkyZTRkYzFjMWMyODkyODJlZjhkZTRjZDhkYzI0Mzk2YmVhNTNmOTg4snUxrw==: 00:31:10.009 21:32:04 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 1 00:31:10.009 21:32:04 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:10.009 21:32:04 -- host/auth.sh@68 -- # digest=sha512 00:31:10.009 21:32:04 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:31:10.009 21:32:04 -- host/auth.sh@68 -- # keyid=1 00:31:10.009 21:32:04 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:10.009 21:32:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:10.009 21:32:04 -- common/autotest_common.sh@10 -- # set +x 00:31:10.009 21:32:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:10.009 21:32:04 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:10.009 21:32:04 -- nvmf/common.sh@717 -- # local ip 00:31:10.009 21:32:04 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:10.009 21:32:04 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:10.009 21:32:04 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:10.009 21:32:04 -- nvmf/common.sh@721 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:10.009 21:32:04 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:10.009 21:32:04 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:10.009 21:32:04 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:10.009 21:32:04 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:10.009 21:32:04 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:10.009 21:32:04 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:31:10.009 21:32:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:10.009 21:32:04 -- common/autotest_common.sh@10 -- # set +x 00:31:10.270 nvme0n1 00:31:10.270 21:32:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:10.270 21:32:04 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:10.270 21:32:04 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:10.270 21:32:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:10.271 21:32:04 -- common/autotest_common.sh@10 -- # set +x 00:31:10.271 21:32:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:10.271 21:32:04 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:10.271 21:32:04 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:10.271 21:32:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:10.271 21:32:04 -- common/autotest_common.sh@10 -- # set +x 00:31:10.271 21:32:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:10.271 21:32:04 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:10.271 21:32:04 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:31:10.271 21:32:04 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:10.271 21:32:04 -- host/auth.sh@44 -- # digest=sha512 00:31:10.271 21:32:04 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:10.271 21:32:04 -- host/auth.sh@44 -- # keyid=2 00:31:10.271 21:32:04 -- host/auth.sh@45 -- # key=DHHC-1:01:ZGM5NmI1YTk2ZWE5YzdmYzc3NmRlYzY0OTZkNjczMTnDSmIv: 00:31:10.271 21:32:04 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:31:10.271 21:32:04 -- host/auth.sh@48 -- # echo ffdhe2048 00:31:10.271 21:32:04 -- host/auth.sh@49 -- # echo DHHC-1:01:ZGM5NmI1YTk2ZWE5YzdmYzc3NmRlYzY0OTZkNjczMTnDSmIv: 00:31:10.271 21:32:04 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 2 00:31:10.271 21:32:04 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:10.271 21:32:04 -- host/auth.sh@68 -- # digest=sha512 00:31:10.271 21:32:04 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:31:10.271 21:32:04 -- host/auth.sh@68 -- # keyid=2 00:31:10.271 21:32:04 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:10.271 21:32:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:10.271 21:32:04 -- common/autotest_common.sh@10 -- # set +x 00:31:10.271 21:32:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:10.271 21:32:04 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:10.271 21:32:04 -- nvmf/common.sh@717 -- # local ip 00:31:10.271 21:32:04 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:10.271 21:32:04 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:10.271 21:32:04 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:10.271 21:32:04 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:10.271 21:32:04 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:10.271 21:32:04 -- nvmf/common.sh@723 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:31:10.271 21:32:04 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:10.271 21:32:04 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:10.271 21:32:04 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:10.271 21:32:04 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:31:10.271 21:32:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:10.271 21:32:04 -- common/autotest_common.sh@10 -- # set +x 00:31:10.271 nvme0n1 00:31:10.271 21:32:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:10.271 21:32:04 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:10.271 21:32:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:10.271 21:32:04 -- common/autotest_common.sh@10 -- # set +x 00:31:10.271 21:32:04 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:10.271 21:32:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:10.271 21:32:04 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:10.271 21:32:04 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:10.271 21:32:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:10.271 21:32:04 -- common/autotest_common.sh@10 -- # set +x 00:31:10.530 21:32:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:10.530 21:32:04 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:10.530 21:32:04 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:31:10.531 21:32:04 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:10.531 21:32:04 -- host/auth.sh@44 -- # digest=sha512 00:31:10.531 21:32:04 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:10.531 21:32:04 -- host/auth.sh@44 -- # keyid=3 00:31:10.531 21:32:04 -- host/auth.sh@45 -- # key=DHHC-1:02:MjE2ZTQ4NGQ2NDJhZDYwNThlMTVmMGZlYTE3NTEzNmIyODVjMTA4NTdlYzdlODhiW7UdpQ==: 00:31:10.531 21:32:04 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:31:10.531 21:32:04 -- host/auth.sh@48 -- # echo ffdhe2048 00:31:10.531 21:32:04 -- host/auth.sh@49 -- # echo DHHC-1:02:MjE2ZTQ4NGQ2NDJhZDYwNThlMTVmMGZlYTE3NTEzNmIyODVjMTA4NTdlYzdlODhiW7UdpQ==: 00:31:10.531 21:32:04 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 3 00:31:10.531 21:32:04 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:10.531 21:32:04 -- host/auth.sh@68 -- # digest=sha512 00:31:10.531 21:32:04 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:31:10.531 21:32:04 -- host/auth.sh@68 -- # keyid=3 00:31:10.531 21:32:04 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:10.531 21:32:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:10.531 21:32:04 -- common/autotest_common.sh@10 -- # set +x 00:31:10.531 21:32:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:10.531 21:32:04 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:10.531 21:32:04 -- nvmf/common.sh@717 -- # local ip 00:31:10.531 21:32:04 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:10.531 21:32:04 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:10.531 21:32:04 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:10.531 21:32:04 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:10.531 21:32:04 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:10.531 21:32:04 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:10.531 21:32:04 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:10.531 21:32:04 -- 
nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:10.531 21:32:04 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:10.531 21:32:04 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:31:10.531 21:32:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:10.531 21:32:04 -- common/autotest_common.sh@10 -- # set +x 00:31:10.531 nvme0n1 00:31:10.531 21:32:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:10.531 21:32:04 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:10.531 21:32:04 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:10.531 21:32:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:10.531 21:32:04 -- common/autotest_common.sh@10 -- # set +x 00:31:10.531 21:32:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:10.531 21:32:04 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:10.531 21:32:04 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:10.531 21:32:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:10.531 21:32:04 -- common/autotest_common.sh@10 -- # set +x 00:31:10.531 21:32:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:10.531 21:32:04 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:10.531 21:32:04 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:31:10.531 21:32:04 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:10.531 21:32:04 -- host/auth.sh@44 -- # digest=sha512 00:31:10.531 21:32:04 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:10.531 21:32:04 -- host/auth.sh@44 -- # keyid=4 00:31:10.531 21:32:04 -- host/auth.sh@45 -- # key=DHHC-1:03:MTdiYmJhYjQ1ZjEyMmNhNmE4MjZmOTZhYWJkZjkzZTNmMGU1NTc1Zjc4NzRjMDY1Yjc3MDIzYzY2MDcwODlkZE1k1io=: 00:31:10.531 21:32:04 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:31:10.531 21:32:04 -- host/auth.sh@48 -- # echo ffdhe2048 00:31:10.531 21:32:04 -- host/auth.sh@49 -- # echo DHHC-1:03:MTdiYmJhYjQ1ZjEyMmNhNmE4MjZmOTZhYWJkZjkzZTNmMGU1NTc1Zjc4NzRjMDY1Yjc3MDIzYzY2MDcwODlkZE1k1io=: 00:31:10.531 21:32:04 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 4 00:31:10.531 21:32:04 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:10.531 21:32:04 -- host/auth.sh@68 -- # digest=sha512 00:31:10.531 21:32:04 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:31:10.531 21:32:04 -- host/auth.sh@68 -- # keyid=4 00:31:10.531 21:32:04 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:10.531 21:32:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:10.531 21:32:04 -- common/autotest_common.sh@10 -- # set +x 00:31:10.531 21:32:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:10.531 21:32:04 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:10.531 21:32:04 -- nvmf/common.sh@717 -- # local ip 00:31:10.531 21:32:04 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:10.531 21:32:04 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:10.531 21:32:04 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:10.531 21:32:04 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:10.531 21:32:04 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:10.531 21:32:04 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:10.531 21:32:04 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:10.531 21:32:04 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:10.531 21:32:04 -- 
nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:10.531 21:32:04 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:10.531 21:32:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:10.531 21:32:04 -- common/autotest_common.sh@10 -- # set +x 00:31:10.790 nvme0n1 00:31:10.790 21:32:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:10.790 21:32:04 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:10.790 21:32:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:10.790 21:32:04 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:10.790 21:32:04 -- common/autotest_common.sh@10 -- # set +x 00:31:10.790 21:32:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:10.790 21:32:04 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:10.790 21:32:04 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:10.790 21:32:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:10.790 21:32:04 -- common/autotest_common.sh@10 -- # set +x 00:31:10.790 21:32:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:10.790 21:32:04 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:31:10.790 21:32:04 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:10.790 21:32:04 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:31:10.790 21:32:04 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:10.790 21:32:04 -- host/auth.sh@44 -- # digest=sha512 00:31:10.790 21:32:04 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:10.790 21:32:04 -- host/auth.sh@44 -- # keyid=0 00:31:10.790 21:32:04 -- host/auth.sh@45 -- # key=DHHC-1:00:MmMwYmQ2ODQ5NGM0ZWVmNjYxNThhMTIyODc5MGM2ZWHGiirJ: 00:31:10.790 21:32:04 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:31:10.790 21:32:04 -- host/auth.sh@48 -- # echo ffdhe3072 00:31:10.790 21:32:04 -- host/auth.sh@49 -- # echo DHHC-1:00:MmMwYmQ2ODQ5NGM0ZWVmNjYxNThhMTIyODc5MGM2ZWHGiirJ: 00:31:10.790 21:32:04 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 0 00:31:10.790 21:32:04 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:10.790 21:32:04 -- host/auth.sh@68 -- # digest=sha512 00:31:10.790 21:32:04 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:31:10.790 21:32:04 -- host/auth.sh@68 -- # keyid=0 00:31:10.790 21:32:04 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:10.790 21:32:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:10.790 21:32:04 -- common/autotest_common.sh@10 -- # set +x 00:31:10.790 21:32:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:10.790 21:32:04 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:10.790 21:32:04 -- nvmf/common.sh@717 -- # local ip 00:31:10.790 21:32:04 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:10.790 21:32:04 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:10.790 21:32:04 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:10.790 21:32:04 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:10.790 21:32:04 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:10.790 21:32:04 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:10.790 21:32:04 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:10.790 21:32:04 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:10.790 21:32:04 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:10.790 21:32:04 -- host/auth.sh@70 -- # 
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:31:10.790 21:32:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:10.790 21:32:04 -- common/autotest_common.sh@10 -- # set +x 00:31:10.790 nvme0n1 00:31:10.790 21:32:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:10.790 21:32:05 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:10.790 21:32:05 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:10.790 21:32:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:10.790 21:32:05 -- common/autotest_common.sh@10 -- # set +x 00:31:11.049 21:32:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:11.049 21:32:05 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:11.049 21:32:05 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:11.049 21:32:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:11.049 21:32:05 -- common/autotest_common.sh@10 -- # set +x 00:31:11.049 21:32:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:11.049 21:32:05 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:11.049 21:32:05 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:31:11.049 21:32:05 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:11.049 21:32:05 -- host/auth.sh@44 -- # digest=sha512 00:31:11.049 21:32:05 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:11.049 21:32:05 -- host/auth.sh@44 -- # keyid=1 00:31:11.049 21:32:05 -- host/auth.sh@45 -- # key=DHHC-1:00:MjAzOTFjZDkyZTRkYzFjMWMyODkyODJlZjhkZTRjZDhkYzI0Mzk2YmVhNTNmOTg4snUxrw==: 00:31:11.049 21:32:05 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:31:11.049 21:32:05 -- host/auth.sh@48 -- # echo ffdhe3072 00:31:11.049 21:32:05 -- host/auth.sh@49 -- # echo DHHC-1:00:MjAzOTFjZDkyZTRkYzFjMWMyODkyODJlZjhkZTRjZDhkYzI0Mzk2YmVhNTNmOTg4snUxrw==: 00:31:11.049 21:32:05 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 1 00:31:11.049 21:32:05 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:11.049 21:32:05 -- host/auth.sh@68 -- # digest=sha512 00:31:11.049 21:32:05 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:31:11.049 21:32:05 -- host/auth.sh@68 -- # keyid=1 00:31:11.049 21:32:05 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:11.049 21:32:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:11.049 21:32:05 -- common/autotest_common.sh@10 -- # set +x 00:31:11.049 21:32:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:11.049 21:32:05 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:11.049 21:32:05 -- nvmf/common.sh@717 -- # local ip 00:31:11.049 21:32:05 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:11.049 21:32:05 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:11.049 21:32:05 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:11.049 21:32:05 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:11.049 21:32:05 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:11.049 21:32:05 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:11.049 21:32:05 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:11.049 21:32:05 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:11.049 21:32:05 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:11.049 21:32:05 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:31:11.049 21:32:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:11.049 21:32:05 -- common/autotest_common.sh@10 -- # set +x 00:31:11.049 nvme0n1 00:31:11.049 21:32:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:11.049 21:32:05 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:11.049 21:32:05 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:11.049 21:32:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:11.049 21:32:05 -- common/autotest_common.sh@10 -- # set +x 00:31:11.049 21:32:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:11.049 21:32:05 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:11.049 21:32:05 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:11.049 21:32:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:11.049 21:32:05 -- common/autotest_common.sh@10 -- # set +x 00:31:11.049 21:32:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:11.049 21:32:05 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:11.049 21:32:05 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:31:11.049 21:32:05 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:11.049 21:32:05 -- host/auth.sh@44 -- # digest=sha512 00:31:11.049 21:32:05 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:11.049 21:32:05 -- host/auth.sh@44 -- # keyid=2 00:31:11.049 21:32:05 -- host/auth.sh@45 -- # key=DHHC-1:01:ZGM5NmI1YTk2ZWE5YzdmYzc3NmRlYzY0OTZkNjczMTnDSmIv: 00:31:11.049 21:32:05 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:31:11.049 21:32:05 -- host/auth.sh@48 -- # echo ffdhe3072 00:31:11.049 21:32:05 -- host/auth.sh@49 -- # echo DHHC-1:01:ZGM5NmI1YTk2ZWE5YzdmYzc3NmRlYzY0OTZkNjczMTnDSmIv: 00:31:11.049 21:32:05 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 2 00:31:11.049 21:32:05 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:11.049 21:32:05 -- host/auth.sh@68 -- # digest=sha512 00:31:11.049 21:32:05 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:31:11.049 21:32:05 -- host/auth.sh@68 -- # keyid=2 00:31:11.049 21:32:05 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:11.049 21:32:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:11.049 21:32:05 -- common/autotest_common.sh@10 -- # set +x 00:31:11.049 21:32:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:11.049 21:32:05 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:11.049 21:32:05 -- nvmf/common.sh@717 -- # local ip 00:31:11.049 21:32:05 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:11.049 21:32:05 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:11.049 21:32:05 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:11.050 21:32:05 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:11.050 21:32:05 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:11.050 21:32:05 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:11.050 21:32:05 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:11.050 21:32:05 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:11.050 21:32:05 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:11.050 21:32:05 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:31:11.050 21:32:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:11.050 21:32:05 -- 
common/autotest_common.sh@10 -- # set +x 00:31:11.308 nvme0n1 00:31:11.308 21:32:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:11.308 21:32:05 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:11.308 21:32:05 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:11.308 21:32:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:11.308 21:32:05 -- common/autotest_common.sh@10 -- # set +x 00:31:11.308 21:32:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:11.308 21:32:05 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:11.308 21:32:05 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:11.308 21:32:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:11.308 21:32:05 -- common/autotest_common.sh@10 -- # set +x 00:31:11.308 21:32:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:11.308 21:32:05 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:11.308 21:32:05 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:31:11.308 21:32:05 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:11.308 21:32:05 -- host/auth.sh@44 -- # digest=sha512 00:31:11.308 21:32:05 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:11.309 21:32:05 -- host/auth.sh@44 -- # keyid=3 00:31:11.309 21:32:05 -- host/auth.sh@45 -- # key=DHHC-1:02:MjE2ZTQ4NGQ2NDJhZDYwNThlMTVmMGZlYTE3NTEzNmIyODVjMTA4NTdlYzdlODhiW7UdpQ==: 00:31:11.309 21:32:05 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:31:11.309 21:32:05 -- host/auth.sh@48 -- # echo ffdhe3072 00:31:11.309 21:32:05 -- host/auth.sh@49 -- # echo DHHC-1:02:MjE2ZTQ4NGQ2NDJhZDYwNThlMTVmMGZlYTE3NTEzNmIyODVjMTA4NTdlYzdlODhiW7UdpQ==: 00:31:11.309 21:32:05 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 3 00:31:11.309 21:32:05 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:11.309 21:32:05 -- host/auth.sh@68 -- # digest=sha512 00:31:11.309 21:32:05 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:31:11.309 21:32:05 -- host/auth.sh@68 -- # keyid=3 00:31:11.309 21:32:05 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:11.309 21:32:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:11.309 21:32:05 -- common/autotest_common.sh@10 -- # set +x 00:31:11.309 21:32:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:11.309 21:32:05 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:11.309 21:32:05 -- nvmf/common.sh@717 -- # local ip 00:31:11.309 21:32:05 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:11.309 21:32:05 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:11.309 21:32:05 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:11.309 21:32:05 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:11.309 21:32:05 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:11.309 21:32:05 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:11.309 21:32:05 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:11.309 21:32:05 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:11.309 21:32:05 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:11.309 21:32:05 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:31:11.309 21:32:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:11.309 21:32:05 -- common/autotest_common.sh@10 -- # set +x 00:31:11.568 nvme0n1 00:31:11.568 21:32:05 -- common/autotest_common.sh@577 -- 
# [[ 0 == 0 ]] 00:31:11.568 21:32:05 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:11.568 21:32:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:11.568 21:32:05 -- common/autotest_common.sh@10 -- # set +x 00:31:11.568 21:32:05 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:11.568 21:32:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:11.568 21:32:05 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:11.568 21:32:05 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:11.568 21:32:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:11.568 21:32:05 -- common/autotest_common.sh@10 -- # set +x 00:31:11.568 21:32:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:11.568 21:32:05 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:11.568 21:32:05 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:31:11.568 21:32:05 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:11.568 21:32:05 -- host/auth.sh@44 -- # digest=sha512 00:31:11.568 21:32:05 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:11.568 21:32:05 -- host/auth.sh@44 -- # keyid=4 00:31:11.568 21:32:05 -- host/auth.sh@45 -- # key=DHHC-1:03:MTdiYmJhYjQ1ZjEyMmNhNmE4MjZmOTZhYWJkZjkzZTNmMGU1NTc1Zjc4NzRjMDY1Yjc3MDIzYzY2MDcwODlkZE1k1io=: 00:31:11.568 21:32:05 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:31:11.568 21:32:05 -- host/auth.sh@48 -- # echo ffdhe3072 00:31:11.568 21:32:05 -- host/auth.sh@49 -- # echo DHHC-1:03:MTdiYmJhYjQ1ZjEyMmNhNmE4MjZmOTZhYWJkZjkzZTNmMGU1NTc1Zjc4NzRjMDY1Yjc3MDIzYzY2MDcwODlkZE1k1io=: 00:31:11.568 21:32:05 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 4 00:31:11.568 21:32:05 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:11.568 21:32:05 -- host/auth.sh@68 -- # digest=sha512 00:31:11.568 21:32:05 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:31:11.568 21:32:05 -- host/auth.sh@68 -- # keyid=4 00:31:11.568 21:32:05 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:11.568 21:32:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:11.568 21:32:05 -- common/autotest_common.sh@10 -- # set +x 00:31:11.568 21:32:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:11.568 21:32:05 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:11.569 21:32:05 -- nvmf/common.sh@717 -- # local ip 00:31:11.569 21:32:05 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:11.569 21:32:05 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:11.569 21:32:05 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:11.569 21:32:05 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:11.569 21:32:05 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:11.569 21:32:05 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:11.569 21:32:05 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:11.569 21:32:05 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:11.569 21:32:05 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:11.569 21:32:05 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:11.569 21:32:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:11.569 21:32:05 -- common/autotest_common.sh@10 -- # set +x 00:31:11.569 nvme0n1 00:31:11.569 21:32:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:11.830 21:32:05 -- host/auth.sh@73 -- # rpc_cmd 
bdev_nvme_get_controllers 00:31:11.830 21:32:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:11.830 21:32:05 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:11.830 21:32:05 -- common/autotest_common.sh@10 -- # set +x 00:31:11.830 21:32:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:11.830 21:32:05 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:11.830 21:32:05 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:11.830 21:32:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:11.830 21:32:05 -- common/autotest_common.sh@10 -- # set +x 00:31:11.830 21:32:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:11.830 21:32:05 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:31:11.830 21:32:05 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:11.830 21:32:05 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:31:11.830 21:32:05 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:11.830 21:32:05 -- host/auth.sh@44 -- # digest=sha512 00:31:11.830 21:32:05 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:11.830 21:32:05 -- host/auth.sh@44 -- # keyid=0 00:31:11.830 21:32:05 -- host/auth.sh@45 -- # key=DHHC-1:00:MmMwYmQ2ODQ5NGM0ZWVmNjYxNThhMTIyODc5MGM2ZWHGiirJ: 00:31:11.830 21:32:05 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:31:11.830 21:32:05 -- host/auth.sh@48 -- # echo ffdhe4096 00:31:11.830 21:32:05 -- host/auth.sh@49 -- # echo DHHC-1:00:MmMwYmQ2ODQ5NGM0ZWVmNjYxNThhMTIyODc5MGM2ZWHGiirJ: 00:31:11.830 21:32:05 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 0 00:31:11.830 21:32:05 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:11.830 21:32:05 -- host/auth.sh@68 -- # digest=sha512 00:31:11.830 21:32:05 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:31:11.830 21:32:05 -- host/auth.sh@68 -- # keyid=0 00:31:11.830 21:32:05 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:11.830 21:32:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:11.830 21:32:05 -- common/autotest_common.sh@10 -- # set +x 00:31:11.830 21:32:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:11.830 21:32:05 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:11.830 21:32:05 -- nvmf/common.sh@717 -- # local ip 00:31:11.830 21:32:05 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:11.830 21:32:05 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:11.830 21:32:05 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:11.830 21:32:05 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:11.830 21:32:05 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:11.830 21:32:05 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:11.830 21:32:05 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:11.830 21:32:05 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:11.830 21:32:05 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:11.830 21:32:05 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:31:11.830 21:32:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:11.830 21:32:05 -- common/autotest_common.sh@10 -- # set +x 00:31:12.091 nvme0n1 00:31:12.091 21:32:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:12.091 21:32:06 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:12.091 21:32:06 -- host/auth.sh@73 -- # jq -r '.[].name' 
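
For orientation, the xtrace above (host/auth.sh@108-111) is the body of a dhgroup/key sweep: for each DH group, each of the five DHHC-1 secrets traced in this run is installed on the kernel nvmet target and then exercised end to end. Below is a minimal reconstruction of that loop from the traced lines only; the configfs redirect targets of the three echo calls at host/auth.sh@47-49 are not visible in the trace, so the paths shown are assumptions (standard kernel nvmet DH-HMAC-CHAP attributes), while the key values are copied verbatim from the trace.

    # Sketch reconstructed from the xtrace; not the verbatim host/auth.sh source.
    digest=sha512
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
    keys[0]=DHHC-1:00:MmMwYmQ2ODQ5NGM0ZWVmNjYxNThhMTIyODc5MGM2ZWHGiirJ:
    keys[1]=DHHC-1:00:MjAzOTFjZDkyZTRkYzFjMWMyODkyODJlZjhkZTRjZDhkYzI0Mzk2YmVhNTNmOTg4snUxrw==:
    keys[2]=DHHC-1:01:ZGM5NmI1YTk2ZWE5YzdmYzc3NmRlYzY0OTZkNjczMTnDSmIv:
    keys[3]=DHHC-1:02:MjE2ZTQ4NGQ2NDJhZDYwNThlMTVmMGZlYTE3NTEzNmIyODVjMTA4NTdlYzdlODhiW7UdpQ==:
    keys[4]=DHHC-1:03:MTdiYmJhYjQ1ZjEyMmNhNmE4MjZmOTZhYWJkZjkzZTNmMGU1NTc1Zjc4NzRjMDY1Yjc3MDIzYzY2MDcwODlkZE1k1io=:

    nvmet_auth_set_key() {                     # host/auth.sh@42-49
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[$keyid]}
        # Assumed destinations; the trace shows only the echoed values.
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
        echo "hmac($digest)" > "$host/dhchap_hash"
        echo "$dhgroup"      > "$host/dhchap_dhgroup"
        echo "$key"          > "$host/dhchap_key"
    }

    for dhgroup in "${dhgroups[@]}"; do        # host/auth.sh@108
        for keyid in "${!keys[@]}"; do         # host/auth.sh@109
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # @110
            connect_authenticate "$digest" "$dhgroup" "$keyid"  # @111
        done
    done
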
00:31:12.091 21:32:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:12.091 21:32:06 -- common/autotest_common.sh@10 -- # set +x 00:31:12.091 21:32:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:12.091 21:32:06 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:12.091 21:32:06 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:12.091 21:32:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:12.091 21:32:06 -- common/autotest_common.sh@10 -- # set +x 00:31:12.091 21:32:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:12.091 21:32:06 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:12.091 21:32:06 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:31:12.091 21:32:06 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:12.091 21:32:06 -- host/auth.sh@44 -- # digest=sha512 00:31:12.091 21:32:06 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:12.091 21:32:06 -- host/auth.sh@44 -- # keyid=1 00:31:12.091 21:32:06 -- host/auth.sh@45 -- # key=DHHC-1:00:MjAzOTFjZDkyZTRkYzFjMWMyODkyODJlZjhkZTRjZDhkYzI0Mzk2YmVhNTNmOTg4snUxrw==: 00:31:12.091 21:32:06 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:31:12.091 21:32:06 -- host/auth.sh@48 -- # echo ffdhe4096 00:31:12.091 21:32:06 -- host/auth.sh@49 -- # echo DHHC-1:00:MjAzOTFjZDkyZTRkYzFjMWMyODkyODJlZjhkZTRjZDhkYzI0Mzk2YmVhNTNmOTg4snUxrw==: 00:31:12.091 21:32:06 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 1 00:31:12.091 21:32:06 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:12.091 21:32:06 -- host/auth.sh@68 -- # digest=sha512 00:31:12.091 21:32:06 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:31:12.091 21:32:06 -- host/auth.sh@68 -- # keyid=1 00:31:12.091 21:32:06 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:12.091 21:32:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:12.091 21:32:06 -- common/autotest_common.sh@10 -- # set +x 00:31:12.091 21:32:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:12.091 21:32:06 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:12.091 21:32:06 -- nvmf/common.sh@717 -- # local ip 00:31:12.091 21:32:06 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:12.091 21:32:06 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:12.091 21:32:06 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:12.091 21:32:06 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:12.091 21:32:06 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:12.091 21:32:06 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:12.091 21:32:06 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:12.091 21:32:06 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:12.091 21:32:06 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:12.091 21:32:06 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:31:12.091 21:32:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:12.091 21:32:06 -- common/autotest_common.sh@10 -- # set +x 00:31:12.352 nvme0n1 00:31:12.352 21:32:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:12.352 21:32:06 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:12.352 21:32:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:12.352 21:32:06 -- common/autotest_common.sh@10 -- # set +x 00:31:12.352 21:32:06 -- host/auth.sh@73 
-- # jq -r '.[].name' 00:31:12.352 21:32:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:12.352 21:32:06 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:12.352 21:32:06 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:12.352 21:32:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:12.352 21:32:06 -- common/autotest_common.sh@10 -- # set +x 00:31:12.352 21:32:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:12.352 21:32:06 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:12.352 21:32:06 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:31:12.352 21:32:06 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:12.352 21:32:06 -- host/auth.sh@44 -- # digest=sha512 00:31:12.352 21:32:06 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:12.352 21:32:06 -- host/auth.sh@44 -- # keyid=2 00:31:12.352 21:32:06 -- host/auth.sh@45 -- # key=DHHC-1:01:ZGM5NmI1YTk2ZWE5YzdmYzc3NmRlYzY0OTZkNjczMTnDSmIv: 00:31:12.352 21:32:06 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:31:12.352 21:32:06 -- host/auth.sh@48 -- # echo ffdhe4096 00:31:12.352 21:32:06 -- host/auth.sh@49 -- # echo DHHC-1:01:ZGM5NmI1YTk2ZWE5YzdmYzc3NmRlYzY0OTZkNjczMTnDSmIv: 00:31:12.352 21:32:06 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 2 00:31:12.352 21:32:06 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:12.352 21:32:06 -- host/auth.sh@68 -- # digest=sha512 00:31:12.352 21:32:06 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:31:12.352 21:32:06 -- host/auth.sh@68 -- # keyid=2 00:31:12.352 21:32:06 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:12.352 21:32:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:12.352 21:32:06 -- common/autotest_common.sh@10 -- # set +x 00:31:12.352 21:32:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:12.352 21:32:06 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:12.352 21:32:06 -- nvmf/common.sh@717 -- # local ip 00:31:12.352 21:32:06 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:12.352 21:32:06 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:12.352 21:32:06 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:12.352 21:32:06 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:12.352 21:32:06 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:12.352 21:32:06 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:12.352 21:32:06 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:12.352 21:32:06 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:12.352 21:32:06 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:12.352 21:32:06 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:31:12.352 21:32:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:12.352 21:32:06 -- common/autotest_common.sh@10 -- # set +x 00:31:12.613 nvme0n1 00:31:12.613 21:32:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:12.613 21:32:06 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:12.613 21:32:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:12.613 21:32:06 -- common/autotest_common.sh@10 -- # set +x 00:31:12.613 21:32:06 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:12.613 21:32:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:12.613 21:32:06 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 
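
The detach entry above closes one authentication round trip. Reconstructed from the traced lines host/auth.sh@66-74, connect_authenticate performs the initiator-side half over SPDK's JSON-RPC; a hedged sketch follows (rpc_cmd is the suite's wrapper around scripts/rpc.py, and the failure handling beyond the traced comparisons is assumed):

    # Sketch reconstructed from the xtrace; not the verbatim host/auth.sh source.
    connect_authenticate() {                   # host/auth.sh@66
        local digest=$1 dhgroup=$2 keyid=$3
        # Restrict the initiator to the combination under test.      @69
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" \
            --dhchap-dhgroups "$dhgroup"
        local ip
        ip=$(get_main_ns_ip)                   # resolves to 10.0.0.1 here
        # Attach with the matching DHHC-1 secret.                    @70
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$ip" -s 4420 -q nqn.2024-02.io.spdk:host0 \
            -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key$keyid"
        # Authentication succeeded iff the controller materialized.  @73
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc_cmd bdev_nvme_detach_controller nvme0                   # @74
    }

Attaching, verifying the controller name, and detaching for every digest/dhgroup/keyid combination is what produces the repeating get_controllers/detach_controller pattern throughout this log.
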
00:31:12.613 21:32:06 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:12.613 21:32:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:12.613 21:32:06 -- common/autotest_common.sh@10 -- # set +x 00:31:12.613 21:32:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:12.613 21:32:06 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:12.613 21:32:06 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:31:12.613 21:32:06 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:12.613 21:32:06 -- host/auth.sh@44 -- # digest=sha512 00:31:12.613 21:32:06 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:12.613 21:32:06 -- host/auth.sh@44 -- # keyid=3 00:31:12.613 21:32:06 -- host/auth.sh@45 -- # key=DHHC-1:02:MjE2ZTQ4NGQ2NDJhZDYwNThlMTVmMGZlYTE3NTEzNmIyODVjMTA4NTdlYzdlODhiW7UdpQ==: 00:31:12.613 21:32:06 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:31:12.613 21:32:06 -- host/auth.sh@48 -- # echo ffdhe4096 00:31:12.613 21:32:06 -- host/auth.sh@49 -- # echo DHHC-1:02:MjE2ZTQ4NGQ2NDJhZDYwNThlMTVmMGZlYTE3NTEzNmIyODVjMTA4NTdlYzdlODhiW7UdpQ==: 00:31:12.613 21:32:06 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 3 00:31:12.613 21:32:06 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:12.613 21:32:06 -- host/auth.sh@68 -- # digest=sha512 00:31:12.613 21:32:06 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:31:12.613 21:32:06 -- host/auth.sh@68 -- # keyid=3 00:31:12.613 21:32:06 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:12.613 21:32:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:12.613 21:32:06 -- common/autotest_common.sh@10 -- # set +x 00:31:12.613 21:32:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:12.613 21:32:06 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:12.613 21:32:06 -- nvmf/common.sh@717 -- # local ip 00:31:12.613 21:32:06 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:12.613 21:32:06 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:12.613 21:32:06 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:12.613 21:32:06 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:12.613 21:32:06 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:12.613 21:32:06 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:12.613 21:32:06 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:12.613 21:32:06 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:12.613 21:32:06 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:12.613 21:32:06 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:31:12.613 21:32:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:12.613 21:32:06 -- common/autotest_common.sh@10 -- # set +x 00:31:12.871 nvme0n1 00:31:12.871 21:32:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:12.871 21:32:06 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:12.871 21:32:06 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:12.871 21:32:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:12.871 21:32:06 -- common/autotest_common.sh@10 -- # set +x 00:31:12.871 21:32:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:12.871 21:32:06 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:12.871 21:32:06 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:12.871 21:32:06 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:31:12.871 21:32:06 -- common/autotest_common.sh@10 -- # set +x 00:31:12.871 21:32:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:12.871 21:32:06 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:12.871 21:32:06 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:31:12.871 21:32:06 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:12.872 21:32:06 -- host/auth.sh@44 -- # digest=sha512 00:31:12.872 21:32:06 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:12.872 21:32:06 -- host/auth.sh@44 -- # keyid=4 00:31:12.872 21:32:06 -- host/auth.sh@45 -- # key=DHHC-1:03:MTdiYmJhYjQ1ZjEyMmNhNmE4MjZmOTZhYWJkZjkzZTNmMGU1NTc1Zjc4NzRjMDY1Yjc3MDIzYzY2MDcwODlkZE1k1io=: 00:31:12.872 21:32:06 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:31:12.872 21:32:06 -- host/auth.sh@48 -- # echo ffdhe4096 00:31:12.872 21:32:06 -- host/auth.sh@49 -- # echo DHHC-1:03:MTdiYmJhYjQ1ZjEyMmNhNmE4MjZmOTZhYWJkZjkzZTNmMGU1NTc1Zjc4NzRjMDY1Yjc3MDIzYzY2MDcwODlkZE1k1io=: 00:31:12.872 21:32:06 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 4 00:31:12.872 21:32:06 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:12.872 21:32:06 -- host/auth.sh@68 -- # digest=sha512 00:31:12.872 21:32:06 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:31:12.872 21:32:06 -- host/auth.sh@68 -- # keyid=4 00:31:12.872 21:32:06 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:12.872 21:32:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:12.872 21:32:06 -- common/autotest_common.sh@10 -- # set +x 00:31:12.872 21:32:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:12.872 21:32:06 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:12.872 21:32:06 -- nvmf/common.sh@717 -- # local ip 00:31:12.872 21:32:06 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:12.872 21:32:06 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:12.872 21:32:06 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:12.872 21:32:06 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:12.872 21:32:06 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:12.872 21:32:06 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:12.872 21:32:06 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:12.872 21:32:06 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:12.872 21:32:06 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:12.872 21:32:06 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:12.872 21:32:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:12.872 21:32:06 -- common/autotest_common.sh@10 -- # set +x 00:31:13.131 nvme0n1 00:31:13.131 21:32:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:13.131 21:32:07 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:13.131 21:32:07 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:13.131 21:32:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:13.131 21:32:07 -- common/autotest_common.sh@10 -- # set +x 00:31:13.131 21:32:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:13.131 21:32:07 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:13.131 21:32:07 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:13.131 21:32:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:13.131 21:32:07 -- 
common/autotest_common.sh@10 -- # set +x 00:31:13.131 21:32:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:13.131 21:32:07 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:31:13.131 21:32:07 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:13.131 21:32:07 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:31:13.131 21:32:07 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:13.131 21:32:07 -- host/auth.sh@44 -- # digest=sha512 00:31:13.131 21:32:07 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:13.131 21:32:07 -- host/auth.sh@44 -- # keyid=0 00:31:13.131 21:32:07 -- host/auth.sh@45 -- # key=DHHC-1:00:MmMwYmQ2ODQ5NGM0ZWVmNjYxNThhMTIyODc5MGM2ZWHGiirJ: 00:31:13.131 21:32:07 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:31:13.131 21:32:07 -- host/auth.sh@48 -- # echo ffdhe6144 00:31:13.131 21:32:07 -- host/auth.sh@49 -- # echo DHHC-1:00:MmMwYmQ2ODQ5NGM0ZWVmNjYxNThhMTIyODc5MGM2ZWHGiirJ: 00:31:13.131 21:32:07 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 0 00:31:13.131 21:32:07 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:13.131 21:32:07 -- host/auth.sh@68 -- # digest=sha512 00:31:13.131 21:32:07 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:31:13.131 21:32:07 -- host/auth.sh@68 -- # keyid=0 00:31:13.131 21:32:07 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:13.131 21:32:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:13.131 21:32:07 -- common/autotest_common.sh@10 -- # set +x 00:31:13.131 21:32:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:13.131 21:32:07 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:13.131 21:32:07 -- nvmf/common.sh@717 -- # local ip 00:31:13.131 21:32:07 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:13.131 21:32:07 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:13.131 21:32:07 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:13.131 21:32:07 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:13.131 21:32:07 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:13.131 21:32:07 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:13.131 21:32:07 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:13.131 21:32:07 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:13.131 21:32:07 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:13.131 21:32:07 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:31:13.131 21:32:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:13.131 21:32:07 -- common/autotest_common.sh@10 -- # set +x 00:31:13.390 nvme0n1 00:31:13.390 21:32:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:13.390 21:32:07 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:13.390 21:32:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:13.390 21:32:07 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:13.390 21:32:07 -- common/autotest_common.sh@10 -- # set +x 00:31:13.390 21:32:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:13.390 21:32:07 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:13.390 21:32:07 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:13.390 21:32:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:13.390 21:32:07 -- common/autotest_common.sh@10 -- # set +x 00:31:13.390 21:32:07 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:13.390 21:32:07 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:13.391 21:32:07 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:31:13.391 21:32:07 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:13.391 21:32:07 -- host/auth.sh@44 -- # digest=sha512 00:31:13.391 21:32:07 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:13.391 21:32:07 -- host/auth.sh@44 -- # keyid=1 00:31:13.391 21:32:07 -- host/auth.sh@45 -- # key=DHHC-1:00:MjAzOTFjZDkyZTRkYzFjMWMyODkyODJlZjhkZTRjZDhkYzI0Mzk2YmVhNTNmOTg4snUxrw==: 00:31:13.391 21:32:07 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:31:13.391 21:32:07 -- host/auth.sh@48 -- # echo ffdhe6144 00:31:13.391 21:32:07 -- host/auth.sh@49 -- # echo DHHC-1:00:MjAzOTFjZDkyZTRkYzFjMWMyODkyODJlZjhkZTRjZDhkYzI0Mzk2YmVhNTNmOTg4snUxrw==: 00:31:13.391 21:32:07 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 1 00:31:13.391 21:32:07 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:13.391 21:32:07 -- host/auth.sh@68 -- # digest=sha512 00:31:13.391 21:32:07 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:31:13.391 21:32:07 -- host/auth.sh@68 -- # keyid=1 00:31:13.391 21:32:07 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:13.391 21:32:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:13.391 21:32:07 -- common/autotest_common.sh@10 -- # set +x 00:31:13.391 21:32:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:13.391 21:32:07 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:13.391 21:32:07 -- nvmf/common.sh@717 -- # local ip 00:31:13.391 21:32:07 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:13.391 21:32:07 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:13.391 21:32:07 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:13.391 21:32:07 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:13.391 21:32:07 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:13.391 21:32:07 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:13.391 21:32:07 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:13.391 21:32:07 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:13.391 21:32:07 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:13.391 21:32:07 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:31:13.391 21:32:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:13.391 21:32:07 -- common/autotest_common.sh@10 -- # set +x 00:31:13.959 nvme0n1 00:31:13.959 21:32:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:13.959 21:32:07 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:13.959 21:32:07 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:13.959 21:32:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:13.959 21:32:07 -- common/autotest_common.sh@10 -- # set +x 00:31:13.959 21:32:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:13.959 21:32:07 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:13.959 21:32:07 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:13.959 21:32:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:13.959 21:32:07 -- common/autotest_common.sh@10 -- # set +x 00:31:13.959 21:32:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:13.959 21:32:07 -- host/auth.sh@109 -- # for keyid in 
"${!keys[@]}" 00:31:13.959 21:32:07 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:31:13.959 21:32:07 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:13.959 21:32:07 -- host/auth.sh@44 -- # digest=sha512 00:31:13.959 21:32:07 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:13.959 21:32:07 -- host/auth.sh@44 -- # keyid=2 00:31:13.959 21:32:07 -- host/auth.sh@45 -- # key=DHHC-1:01:ZGM5NmI1YTk2ZWE5YzdmYzc3NmRlYzY0OTZkNjczMTnDSmIv: 00:31:13.959 21:32:07 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:31:13.959 21:32:07 -- host/auth.sh@48 -- # echo ffdhe6144 00:31:13.959 21:32:07 -- host/auth.sh@49 -- # echo DHHC-1:01:ZGM5NmI1YTk2ZWE5YzdmYzc3NmRlYzY0OTZkNjczMTnDSmIv: 00:31:13.959 21:32:07 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 2 00:31:13.959 21:32:07 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:13.959 21:32:07 -- host/auth.sh@68 -- # digest=sha512 00:31:13.959 21:32:07 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:31:13.959 21:32:07 -- host/auth.sh@68 -- # keyid=2 00:31:13.959 21:32:07 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:13.959 21:32:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:13.959 21:32:07 -- common/autotest_common.sh@10 -- # set +x 00:31:13.959 21:32:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:13.959 21:32:08 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:13.959 21:32:08 -- nvmf/common.sh@717 -- # local ip 00:31:13.959 21:32:08 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:13.959 21:32:08 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:13.959 21:32:08 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:13.959 21:32:08 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:13.959 21:32:08 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:13.959 21:32:08 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:13.959 21:32:08 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:13.959 21:32:08 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:13.959 21:32:08 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:13.959 21:32:08 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:31:13.959 21:32:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:13.959 21:32:08 -- common/autotest_common.sh@10 -- # set +x 00:31:14.219 nvme0n1 00:31:14.219 21:32:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:14.219 21:32:08 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:14.219 21:32:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:14.219 21:32:08 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:14.219 21:32:08 -- common/autotest_common.sh@10 -- # set +x 00:31:14.219 21:32:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:14.219 21:32:08 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:14.219 21:32:08 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:14.219 21:32:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:14.219 21:32:08 -- common/autotest_common.sh@10 -- # set +x 00:31:14.219 21:32:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:14.219 21:32:08 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:14.219 21:32:08 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:31:14.219 21:32:08 -- host/auth.sh@42 -- # local digest dhgroup 
keyid key 00:31:14.219 21:32:08 -- host/auth.sh@44 -- # digest=sha512 00:31:14.219 21:32:08 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:14.220 21:32:08 -- host/auth.sh@44 -- # keyid=3 00:31:14.220 21:32:08 -- host/auth.sh@45 -- # key=DHHC-1:02:MjE2ZTQ4NGQ2NDJhZDYwNThlMTVmMGZlYTE3NTEzNmIyODVjMTA4NTdlYzdlODhiW7UdpQ==: 00:31:14.220 21:32:08 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:31:14.220 21:32:08 -- host/auth.sh@48 -- # echo ffdhe6144 00:31:14.220 21:32:08 -- host/auth.sh@49 -- # echo DHHC-1:02:MjE2ZTQ4NGQ2NDJhZDYwNThlMTVmMGZlYTE3NTEzNmIyODVjMTA4NTdlYzdlODhiW7UdpQ==: 00:31:14.220 21:32:08 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 3 00:31:14.220 21:32:08 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:14.220 21:32:08 -- host/auth.sh@68 -- # digest=sha512 00:31:14.220 21:32:08 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:31:14.220 21:32:08 -- host/auth.sh@68 -- # keyid=3 00:31:14.220 21:32:08 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:14.220 21:32:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:14.220 21:32:08 -- common/autotest_common.sh@10 -- # set +x 00:31:14.220 21:32:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:14.220 21:32:08 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:14.220 21:32:08 -- nvmf/common.sh@717 -- # local ip 00:31:14.220 21:32:08 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:14.220 21:32:08 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:14.220 21:32:08 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:14.220 21:32:08 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:14.220 21:32:08 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:14.220 21:32:08 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:14.220 21:32:08 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:14.220 21:32:08 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:14.220 21:32:08 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:14.220 21:32:08 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:31:14.220 21:32:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:14.220 21:32:08 -- common/autotest_common.sh@10 -- # set +x 00:31:14.481 nvme0n1 00:31:14.481 21:32:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:14.742 21:32:08 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:14.742 21:32:08 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:14.742 21:32:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:14.742 21:32:08 -- common/autotest_common.sh@10 -- # set +x 00:31:14.742 21:32:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:14.742 21:32:08 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:14.742 21:32:08 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:14.742 21:32:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:14.742 21:32:08 -- common/autotest_common.sh@10 -- # set +x 00:31:14.742 21:32:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:14.742 21:32:08 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:14.742 21:32:08 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:31:14.742 21:32:08 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:14.742 21:32:08 -- host/auth.sh@44 -- # digest=sha512 00:31:14.742 21:32:08 -- host/auth.sh@44 -- # 
dhgroup=ffdhe6144 00:31:14.742 21:32:08 -- host/auth.sh@44 -- # keyid=4 00:31:14.742 21:32:08 -- host/auth.sh@45 -- # key=DHHC-1:03:MTdiYmJhYjQ1ZjEyMmNhNmE4MjZmOTZhYWJkZjkzZTNmMGU1NTc1Zjc4NzRjMDY1Yjc3MDIzYzY2MDcwODlkZE1k1io=: 00:31:14.742 21:32:08 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:31:14.742 21:32:08 -- host/auth.sh@48 -- # echo ffdhe6144 00:31:14.742 21:32:08 -- host/auth.sh@49 -- # echo DHHC-1:03:MTdiYmJhYjQ1ZjEyMmNhNmE4MjZmOTZhYWJkZjkzZTNmMGU1NTc1Zjc4NzRjMDY1Yjc3MDIzYzY2MDcwODlkZE1k1io=: 00:31:14.742 21:32:08 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 4 00:31:14.742 21:32:08 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:14.742 21:32:08 -- host/auth.sh@68 -- # digest=sha512 00:31:14.742 21:32:08 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:31:14.742 21:32:08 -- host/auth.sh@68 -- # keyid=4 00:31:14.742 21:32:08 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:14.742 21:32:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:14.742 21:32:08 -- common/autotest_common.sh@10 -- # set +x 00:31:14.742 21:32:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:14.742 21:32:08 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:14.742 21:32:08 -- nvmf/common.sh@717 -- # local ip 00:31:14.742 21:32:08 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:14.742 21:32:08 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:14.742 21:32:08 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:14.742 21:32:08 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:14.742 21:32:08 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:14.742 21:32:08 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:14.742 21:32:08 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:14.742 21:32:08 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:14.742 21:32:08 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:14.742 21:32:08 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:14.742 21:32:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:14.742 21:32:08 -- common/autotest_common.sh@10 -- # set +x 00:31:15.001 nvme0n1 00:31:15.001 21:32:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:15.001 21:32:09 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:15.001 21:32:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:15.001 21:32:09 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:15.001 21:32:09 -- common/autotest_common.sh@10 -- # set +x 00:31:15.001 21:32:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:15.001 21:32:09 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:15.001 21:32:09 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:15.001 21:32:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:15.001 21:32:09 -- common/autotest_common.sh@10 -- # set +x 00:31:15.001 21:32:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:15.001 21:32:09 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:31:15.001 21:32:09 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:15.001 21:32:09 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:31:15.001 21:32:09 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:15.001 21:32:09 -- host/auth.sh@44 -- # digest=sha512 00:31:15.001 21:32:09 -- host/auth.sh@44 -- # 
dhgroup=ffdhe8192 00:31:15.001 21:32:09 -- host/auth.sh@44 -- # keyid=0 00:31:15.002 21:32:09 -- host/auth.sh@45 -- # key=DHHC-1:00:MmMwYmQ2ODQ5NGM0ZWVmNjYxNThhMTIyODc5MGM2ZWHGiirJ: 00:31:15.002 21:32:09 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:31:15.002 21:32:09 -- host/auth.sh@48 -- # echo ffdhe8192 00:31:15.002 21:32:09 -- host/auth.sh@49 -- # echo DHHC-1:00:MmMwYmQ2ODQ5NGM0ZWVmNjYxNThhMTIyODc5MGM2ZWHGiirJ: 00:31:15.002 21:32:09 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 0 00:31:15.002 21:32:09 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:15.002 21:32:09 -- host/auth.sh@68 -- # digest=sha512 00:31:15.002 21:32:09 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:31:15.002 21:32:09 -- host/auth.sh@68 -- # keyid=0 00:31:15.002 21:32:09 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:15.002 21:32:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:15.002 21:32:09 -- common/autotest_common.sh@10 -- # set +x 00:31:15.002 21:32:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:15.002 21:32:09 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:15.002 21:32:09 -- nvmf/common.sh@717 -- # local ip 00:31:15.002 21:32:09 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:15.002 21:32:09 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:15.002 21:32:09 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:15.002 21:32:09 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:15.002 21:32:09 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:15.002 21:32:09 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:15.002 21:32:09 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:15.002 21:32:09 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:15.002 21:32:09 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:15.002 21:32:09 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:31:15.002 21:32:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:15.002 21:32:09 -- common/autotest_common.sh@10 -- # set +x 00:31:15.569 nvme0n1 00:31:15.569 21:32:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:15.569 21:32:09 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:15.569 21:32:09 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:15.569 21:32:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:15.569 21:32:09 -- common/autotest_common.sh@10 -- # set +x 00:31:15.569 21:32:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:15.569 21:32:09 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:15.569 21:32:09 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:15.569 21:32:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:15.569 21:32:09 -- common/autotest_common.sh@10 -- # set +x 00:31:15.569 21:32:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:15.569 21:32:09 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:15.569 21:32:09 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:31:15.569 21:32:09 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:15.569 21:32:09 -- host/auth.sh@44 -- # digest=sha512 00:31:15.569 21:32:09 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:15.569 21:32:09 -- host/auth.sh@44 -- # keyid=1 00:31:15.569 21:32:09 -- host/auth.sh@45 -- # 
key=DHHC-1:00:MjAzOTFjZDkyZTRkYzFjMWMyODkyODJlZjhkZTRjZDhkYzI0Mzk2YmVhNTNmOTg4snUxrw==: 00:31:15.569 21:32:09 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:31:15.569 21:32:09 -- host/auth.sh@48 -- # echo ffdhe8192 00:31:15.569 21:32:09 -- host/auth.sh@49 -- # echo DHHC-1:00:MjAzOTFjZDkyZTRkYzFjMWMyODkyODJlZjhkZTRjZDhkYzI0Mzk2YmVhNTNmOTg4snUxrw==: 00:31:15.569 21:32:09 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 1 00:31:15.569 21:32:09 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:15.569 21:32:09 -- host/auth.sh@68 -- # digest=sha512 00:31:15.569 21:32:09 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:31:15.569 21:32:09 -- host/auth.sh@68 -- # keyid=1 00:31:15.569 21:32:09 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:15.569 21:32:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:15.569 21:32:09 -- common/autotest_common.sh@10 -- # set +x 00:31:15.569 21:32:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:15.569 21:32:09 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:15.569 21:32:09 -- nvmf/common.sh@717 -- # local ip 00:31:15.569 21:32:09 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:15.569 21:32:09 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:15.569 21:32:09 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:15.569 21:32:09 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:15.569 21:32:09 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:15.569 21:32:09 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:15.569 21:32:09 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:15.569 21:32:09 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:15.569 21:32:09 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:15.569 21:32:09 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:31:15.569 21:32:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:15.569 21:32:09 -- common/autotest_common.sh@10 -- # set +x 00:31:16.137 nvme0n1 00:31:16.137 21:32:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:16.137 21:32:10 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:16.137 21:32:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:16.137 21:32:10 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:16.137 21:32:10 -- common/autotest_common.sh@10 -- # set +x 00:31:16.137 21:32:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:16.398 21:32:10 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:16.398 21:32:10 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:16.398 21:32:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:16.398 21:32:10 -- common/autotest_common.sh@10 -- # set +x 00:31:16.398 21:32:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:16.398 21:32:10 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:16.398 21:32:10 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:31:16.398 21:32:10 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:16.398 21:32:10 -- host/auth.sh@44 -- # digest=sha512 00:31:16.398 21:32:10 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:16.398 21:32:10 -- host/auth.sh@44 -- # keyid=2 00:31:16.398 21:32:10 -- host/auth.sh@45 -- # key=DHHC-1:01:ZGM5NmI1YTk2ZWE5YzdmYzc3NmRlYzY0OTZkNjczMTnDSmIv: 00:31:16.398 21:32:10 -- host/auth.sh@47 -- # echo 
'hmac(sha512)' 00:31:16.398 21:32:10 -- host/auth.sh@48 -- # echo ffdhe8192 00:31:16.398 21:32:10 -- host/auth.sh@49 -- # echo DHHC-1:01:ZGM5NmI1YTk2ZWE5YzdmYzc3NmRlYzY0OTZkNjczMTnDSmIv: 00:31:16.398 21:32:10 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 2 00:31:16.398 21:32:10 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:16.398 21:32:10 -- host/auth.sh@68 -- # digest=sha512 00:31:16.398 21:32:10 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:31:16.398 21:32:10 -- host/auth.sh@68 -- # keyid=2 00:31:16.398 21:32:10 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:16.398 21:32:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:16.398 21:32:10 -- common/autotest_common.sh@10 -- # set +x 00:31:16.398 21:32:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:16.398 21:32:10 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:16.398 21:32:10 -- nvmf/common.sh@717 -- # local ip 00:31:16.398 21:32:10 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:16.398 21:32:10 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:16.398 21:32:10 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:16.398 21:32:10 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:16.398 21:32:10 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:16.398 21:32:10 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:16.398 21:32:10 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:16.398 21:32:10 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:16.398 21:32:10 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:16.398 21:32:10 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:31:16.398 21:32:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:16.398 21:32:10 -- common/autotest_common.sh@10 -- # set +x 00:31:16.969 nvme0n1 00:31:16.969 21:32:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:16.969 21:32:11 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:16.969 21:32:11 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:16.969 21:32:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:16.969 21:32:11 -- common/autotest_common.sh@10 -- # set +x 00:31:16.969 21:32:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:16.969 21:32:11 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:16.969 21:32:11 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:16.969 21:32:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:16.969 21:32:11 -- common/autotest_common.sh@10 -- # set +x 00:31:16.969 21:32:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:16.969 21:32:11 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:16.969 21:32:11 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:31:16.969 21:32:11 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:16.969 21:32:11 -- host/auth.sh@44 -- # digest=sha512 00:31:16.969 21:32:11 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:16.969 21:32:11 -- host/auth.sh@44 -- # keyid=3 00:31:16.969 21:32:11 -- host/auth.sh@45 -- # key=DHHC-1:02:MjE2ZTQ4NGQ2NDJhZDYwNThlMTVmMGZlYTE3NTEzNmIyODVjMTA4NTdlYzdlODhiW7UdpQ==: 00:31:16.969 21:32:11 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:31:16.969 21:32:11 -- host/auth.sh@48 -- # echo ffdhe8192 00:31:16.969 21:32:11 -- host/auth.sh@49 -- # echo 
DHHC-1:02:MjE2ZTQ4NGQ2NDJhZDYwNThlMTVmMGZlYTE3NTEzNmIyODVjMTA4NTdlYzdlODhiW7UdpQ==: 00:31:16.969 21:32:11 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 3 00:31:16.969 21:32:11 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:16.969 21:32:11 -- host/auth.sh@68 -- # digest=sha512 00:31:16.969 21:32:11 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:31:16.969 21:32:11 -- host/auth.sh@68 -- # keyid=3 00:31:16.969 21:32:11 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:16.969 21:32:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:16.969 21:32:11 -- common/autotest_common.sh@10 -- # set +x 00:31:16.969 21:32:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:16.969 21:32:11 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:16.969 21:32:11 -- nvmf/common.sh@717 -- # local ip 00:31:16.969 21:32:11 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:16.969 21:32:11 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:16.969 21:32:11 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:16.969 21:32:11 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:16.970 21:32:11 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:16.970 21:32:11 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:16.970 21:32:11 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:16.970 21:32:11 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:16.970 21:32:11 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:16.970 21:32:11 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:31:16.970 21:32:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:16.970 21:32:11 -- common/autotest_common.sh@10 -- # set +x 00:31:17.537 nvme0n1 00:31:17.537 21:32:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:17.537 21:32:11 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:17.537 21:32:11 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:17.537 21:32:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:17.537 21:32:11 -- common/autotest_common.sh@10 -- # set +x 00:31:17.537 21:32:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:17.537 21:32:11 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:17.537 21:32:11 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:17.537 21:32:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:17.537 21:32:11 -- common/autotest_common.sh@10 -- # set +x 00:31:17.537 21:32:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:17.537 21:32:11 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:17.537 21:32:11 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:31:17.537 21:32:11 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:17.537 21:32:11 -- host/auth.sh@44 -- # digest=sha512 00:31:17.537 21:32:11 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:17.537 21:32:11 -- host/auth.sh@44 -- # keyid=4 00:31:17.537 21:32:11 -- host/auth.sh@45 -- # key=DHHC-1:03:MTdiYmJhYjQ1ZjEyMmNhNmE4MjZmOTZhYWJkZjkzZTNmMGU1NTc1Zjc4NzRjMDY1Yjc3MDIzYzY2MDcwODlkZE1k1io=: 00:31:17.537 21:32:11 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:31:17.537 21:32:11 -- host/auth.sh@48 -- # echo ffdhe8192 00:31:17.537 21:32:11 -- host/auth.sh@49 -- # echo 
DHHC-1:03:MTdiYmJhYjQ1ZjEyMmNhNmE4MjZmOTZhYWJkZjkzZTNmMGU1NTc1Zjc4NzRjMDY1Yjc3MDIzYzY2MDcwODlkZE1k1io=: 00:31:17.537 21:32:11 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 4 00:31:17.537 21:32:11 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:17.537 21:32:11 -- host/auth.sh@68 -- # digest=sha512 00:31:17.537 21:32:11 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:31:17.537 21:32:11 -- host/auth.sh@68 -- # keyid=4 00:31:17.537 21:32:11 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:31:17.537 21:32:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:17.537 21:32:11 -- common/autotest_common.sh@10 -- # set +x 00:31:17.537 21:32:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:17.537 21:32:11 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:17.537 21:32:11 -- nvmf/common.sh@717 -- # local ip 00:31:17.537 21:32:11 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:17.537 21:32:11 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:17.537 21:32:11 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:17.537 21:32:11 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:17.537 21:32:11 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:17.537 21:32:11 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:17.537 21:32:11 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:17.537 21:32:11 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:17.537 21:32:11 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:17.537 21:32:11 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:17.537 21:32:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:17.537 21:32:11 -- common/autotest_common.sh@10 -- # set +x 00:31:18.103 nvme0n1 00:31:18.103 21:32:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:18.103 21:32:12 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:18.103 21:32:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:18.103 21:32:12 -- common/autotest_common.sh@10 -- # set +x 00:31:18.103 21:32:12 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:18.103 21:32:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:18.103 21:32:12 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:18.103 21:32:12 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:18.103 21:32:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:18.103 21:32:12 -- common/autotest_common.sh@10 -- # set +x 00:31:18.103 21:32:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:18.103 21:32:12 -- host/auth.sh@117 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:31:18.104 21:32:12 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:18.104 21:32:12 -- host/auth.sh@44 -- # digest=sha256 00:31:18.104 21:32:12 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:18.104 21:32:12 -- host/auth.sh@44 -- # keyid=1 00:31:18.104 21:32:12 -- host/auth.sh@45 -- # key=DHHC-1:00:MjAzOTFjZDkyZTRkYzFjMWMyODkyODJlZjhkZTRjZDhkYzI0Mzk2YmVhNTNmOTg4snUxrw==: 00:31:18.104 21:32:12 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:31:18.104 21:32:12 -- host/auth.sh@48 -- # echo ffdhe2048 00:31:18.104 21:32:12 -- host/auth.sh@49 -- # echo DHHC-1:00:MjAzOTFjZDkyZTRkYzFjMWMyODkyODJlZjhkZTRjZDhkYzI0Mzk2YmVhNTNmOTg4snUxrw==: 00:31:18.104 21:32:12 -- host/auth.sh@118 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:18.104 21:32:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:18.104 21:32:12 -- common/autotest_common.sh@10 -- # set +x 00:31:18.104 21:32:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:18.104 21:32:12 -- host/auth.sh@119 -- # get_main_ns_ip 00:31:18.104 21:32:12 -- nvmf/common.sh@717 -- # local ip 00:31:18.104 21:32:12 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:18.104 21:32:12 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:18.104 21:32:12 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:18.104 21:32:12 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:18.104 21:32:12 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:18.104 21:32:12 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:18.104 21:32:12 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:18.104 21:32:12 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:18.104 21:32:12 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:18.104 21:32:12 -- host/auth.sh@119 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:31:18.104 21:32:12 -- common/autotest_common.sh@638 -- # local es=0 00:31:18.104 21:32:12 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:31:18.104 21:32:12 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:31:18.104 21:32:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:31:18.104 21:32:12 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:31:18.104 21:32:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:31:18.104 21:32:12 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:31:18.104 21:32:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:18.104 21:32:12 -- common/autotest_common.sh@10 -- # set +x 00:31:18.104 request: 00:31:18.104 { 00:31:18.104 "name": "nvme0", 00:31:18.104 "trtype": "tcp", 00:31:18.104 "traddr": "10.0.0.1", 00:31:18.104 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:31:18.104 "adrfam": "ipv4", 00:31:18.104 "trsvcid": "4420", 00:31:18.104 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:31:18.104 "method": "bdev_nvme_attach_controller", 00:31:18.104 "req_id": 1 00:31:18.104 } 00:31:18.104 Got JSON-RPC error response 00:31:18.104 response: 00:31:18.104 { 00:31:18.104 "code": -32602, 00:31:18.104 "message": "Invalid parameters" 00:31:18.104 } 00:31:18.104 21:32:12 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:31:18.104 21:32:12 -- common/autotest_common.sh@641 -- # es=1 00:31:18.104 21:32:12 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:31:18.104 21:32:12 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:31:18.104 21:32:12 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:31:18.104 21:32:12 -- host/auth.sh@121 -- # rpc_cmd bdev_nvme_get_controllers 00:31:18.104 21:32:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:18.104 21:32:12 -- common/autotest_common.sh@10 -- # set +x 00:31:18.104 21:32:12 -- host/auth.sh@121 -- # jq length 00:31:18.104 21:32:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:18.104 21:32:12 -- host/auth.sh@121 -- # (( 0 == 0 )) 00:31:18.104 21:32:12 -- 
host/auth.sh@124 -- # get_main_ns_ip 00:31:18.104 21:32:12 -- nvmf/common.sh@717 -- # local ip 00:31:18.104 21:32:12 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:18.104 21:32:12 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:18.104 21:32:12 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:18.104 21:32:12 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:18.104 21:32:12 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:18.104 21:32:12 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:18.104 21:32:12 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:18.104 21:32:12 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:18.104 21:32:12 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:18.104 21:32:12 -- host/auth.sh@124 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:31:18.104 21:32:12 -- common/autotest_common.sh@638 -- # local es=0 00:31:18.104 21:32:12 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:31:18.104 21:32:12 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:31:18.104 21:32:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:31:18.104 21:32:12 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:31:18.104 21:32:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:31:18.104 21:32:12 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:31:18.104 21:32:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:18.104 21:32:12 -- common/autotest_common.sh@10 -- # set +x 00:31:18.364 request: 00:31:18.364 { 00:31:18.364 "name": "nvme0", 00:31:18.364 "trtype": "tcp", 00:31:18.364 "traddr": "10.0.0.1", 00:31:18.364 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:31:18.364 "adrfam": "ipv4", 00:31:18.364 "trsvcid": "4420", 00:31:18.364 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:31:18.364 "dhchap_key": "key2", 00:31:18.364 "method": "bdev_nvme_attach_controller", 00:31:18.364 "req_id": 1 00:31:18.364 } 00:31:18.364 Got JSON-RPC error response 00:31:18.364 response: 00:31:18.364 { 00:31:18.364 "code": -32602, 00:31:18.364 "message": "Invalid parameters" 00:31:18.364 } 00:31:18.364 21:32:12 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:31:18.364 21:32:12 -- common/autotest_common.sh@641 -- # es=1 00:31:18.364 21:32:12 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:31:18.364 21:32:12 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:31:18.364 21:32:12 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:31:18.364 21:32:12 -- host/auth.sh@127 -- # rpc_cmd bdev_nvme_get_controllers 00:31:18.364 21:32:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:18.364 21:32:12 -- host/auth.sh@127 -- # jq length 00:31:18.364 21:32:12 -- common/autotest_common.sh@10 -- # set +x 00:31:18.364 21:32:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:18.364 21:32:12 -- host/auth.sh@127 -- # (( 0 == 0 )) 00:31:18.364 21:32:12 -- host/auth.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:31:18.364 21:32:12 -- host/auth.sh@130 -- # cleanup 00:31:18.364 21:32:12 -- host/auth.sh@24 -- # nvmftestfini 00:31:18.364 21:32:12 -- nvmf/common.sh@477 -- 
# nvmfcleanup 00:31:18.364 21:32:12 -- nvmf/common.sh@117 -- # sync 00:31:18.364 21:32:12 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:18.364 21:32:12 -- nvmf/common.sh@120 -- # set +e 00:31:18.364 21:32:12 -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:18.364 21:32:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:18.364 rmmod nvme_tcp 00:31:18.364 rmmod nvme_fabrics 00:31:18.364 21:32:12 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:18.364 21:32:12 -- nvmf/common.sh@124 -- # set -e 00:31:18.364 21:32:12 -- nvmf/common.sh@125 -- # return 0 00:31:18.364 21:32:12 -- nvmf/common.sh@478 -- # '[' -n 1637164 ']' 00:31:18.364 21:32:12 -- nvmf/common.sh@479 -- # killprocess 1637164 00:31:18.364 21:32:12 -- common/autotest_common.sh@936 -- # '[' -z 1637164 ']' 00:31:18.364 21:32:12 -- common/autotest_common.sh@940 -- # kill -0 1637164 00:31:18.364 21:32:12 -- common/autotest_common.sh@941 -- # uname 00:31:18.364 21:32:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:31:18.364 21:32:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1637164 00:31:18.364 21:32:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:31:18.364 21:32:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:31:18.364 21:32:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1637164' 00:31:18.364 killing process with pid 1637164 00:31:18.364 21:32:12 -- common/autotest_common.sh@955 -- # kill 1637164 00:31:18.364 21:32:12 -- common/autotest_common.sh@960 -- # wait 1637164 00:31:18.936 21:32:12 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:31:18.936 21:32:12 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:31:18.936 21:32:12 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:31:18.936 21:32:12 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:18.936 21:32:12 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:18.936 21:32:12 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:18.936 21:32:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:18.936 21:32:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:20.851 21:32:14 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:20.851 21:32:14 -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:31:20.851 21:32:14 -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:31:20.851 21:32:14 -- host/auth.sh@27 -- # clean_kernel_target 00:31:20.851 21:32:14 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:31:20.851 21:32:14 -- nvmf/common.sh@675 -- # echo 0 00:31:20.851 21:32:14 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:20.851 21:32:14 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:31:20.851 21:32:14 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:31:20.851 21:32:15 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:20.851 21:32:15 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:31:20.851 21:32:15 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:31:20.851 21:32:15 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:31:24.161 0000:74:02.0 (8086 0cfe): idxd 
-> vfio-pci 00:31:24.161 0000:f1:02.0 (8086 0cfe): idxd -> vfio-pci 00:31:24.161 0000:79:02.0 (8086 0cfe): idxd -> vfio-pci 00:31:24.161 0000:6f:01.0 (8086 0b25): idxd -> vfio-pci 00:31:24.161 0000:6f:02.0 (8086 0cfe): idxd -> vfio-pci 00:31:24.161 0000:f6:01.0 (8086 0b25): idxd -> vfio-pci 00:31:24.161 0000:f6:02.0 (8086 0cfe): idxd -> vfio-pci 00:31:24.161 0000:74:01.0 (8086 0b25): idxd -> vfio-pci 00:31:24.161 0000:6a:02.0 (8086 0cfe): idxd -> vfio-pci 00:31:24.161 0000:79:01.0 (8086 0b25): idxd -> vfio-pci 00:31:24.161 0000:ec:01.0 (8086 0b25): idxd -> vfio-pci 00:31:24.161 0000:6a:01.0 (8086 0b25): idxd -> vfio-pci 00:31:24.161 0000:ec:02.0 (8086 0cfe): idxd -> vfio-pci 00:31:24.161 0000:e7:01.0 (8086 0b25): idxd -> vfio-pci 00:31:24.161 0000:e7:02.0 (8086 0cfe): idxd -> vfio-pci 00:31:24.161 0000:f1:01.0 (8086 0b25): idxd -> vfio-pci 00:31:24.734 0000:c9:00.0 (144d a80a): nvme -> vfio-pci 00:31:24.995 0000:03:00.0 (1344 51c3): nvme -> vfio-pci 00:31:25.257 21:32:19 -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.fGX /tmp/spdk.key-null.VwQ /tmp/spdk.key-sha256.uQl /tmp/spdk.key-sha384.KEp /tmp/spdk.key-sha512.oHd /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/nvme-auth.log 00:31:25.257 21:32:19 -- host/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:31:27.801 0000:74:02.0 (8086 0cfe): Already using the vfio-pci driver 00:31:27.801 0000:c9:00.0 (144d a80a): Already using the vfio-pci driver 00:31:27.801 0000:f1:02.0 (8086 0cfe): Already using the vfio-pci driver 00:31:27.801 0000:79:02.0 (8086 0cfe): Already using the vfio-pci driver 00:31:27.801 0000:6f:01.0 (8086 0b25): Already using the vfio-pci driver 00:31:27.801 0000:6f:02.0 (8086 0cfe): Already using the vfio-pci driver 00:31:27.801 0000:f6:01.0 (8086 0b25): Already using the vfio-pci driver 00:31:27.801 0000:f6:02.0 (8086 0cfe): Already using the vfio-pci driver 00:31:27.801 0000:74:01.0 (8086 0b25): Already using the vfio-pci driver 00:31:27.801 0000:6a:02.0 (8086 0cfe): Already using the vfio-pci driver 00:31:27.801 0000:79:01.0 (8086 0b25): Already using the vfio-pci driver 00:31:27.801 0000:ec:01.0 (8086 0b25): Already using the vfio-pci driver 00:31:27.801 0000:6a:01.0 (8086 0b25): Already using the vfio-pci driver 00:31:27.801 0000:ec:02.0 (8086 0cfe): Already using the vfio-pci driver 00:31:27.801 0000:e7:01.0 (8086 0b25): Already using the vfio-pci driver 00:31:27.801 0000:e7:02.0 (8086 0cfe): Already using the vfio-pci driver 00:31:27.801 0000:f1:01.0 (8086 0b25): Already using the vfio-pci driver 00:31:27.801 0000:03:00.0 (1344 51c3): Already using the vfio-pci driver 00:31:28.059 00:31:28.059 real 0m46.012s 00:31:28.059 user 0m38.661s 00:31:28.059 sys 0m11.214s 00:31:28.059 21:32:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:28.059 21:32:22 -- common/autotest_common.sh@10 -- # set +x 00:31:28.059 ************************************ 00:31:28.059 END TEST nvmf_auth 00:31:28.059 ************************************ 00:31:28.059 21:32:22 -- nvmf/nvmf.sh@104 -- # [[ tcp == \t\c\p ]] 00:31:28.059 21:32:22 -- nvmf/nvmf.sh@105 -- # run_test nvmf_digest /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:31:28.059 21:32:22 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:31:28.059 21:32:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:28.059 21:32:22 -- common/autotest_common.sh@10 -- # set +x 00:31:28.059 ************************************ 00:31:28.059 START TEST nvmf_digest 00:31:28.059 
************************************ 00:31:28.059 21:32:22 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:31:28.318 * Looking for test storage... 00:31:28.318 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:31:28.318 21:32:22 -- host/digest.sh@12 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:31:28.319 21:32:22 -- nvmf/common.sh@7 -- # uname -s 00:31:28.319 21:32:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:28.319 21:32:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:28.319 21:32:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:28.319 21:32:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:28.319 21:32:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:28.319 21:32:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:28.319 21:32:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:28.319 21:32:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:28.319 21:32:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:28.319 21:32:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:28.319 21:32:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:31:28.319 21:32:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:31:28.319 21:32:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:28.319 21:32:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:28.319 21:32:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:31:28.319 21:32:22 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:28.319 21:32:22 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:31:28.319 21:32:22 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:28.319 21:32:22 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:28.319 21:32:22 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:28.319 21:32:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:28.319 21:32:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:28.319 21:32:22 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:28.319 21:32:22 -- paths/export.sh@5 -- # export PATH 00:31:28.319 21:32:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:28.319 21:32:22 -- nvmf/common.sh@47 -- # : 0 00:31:28.319 21:32:22 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:28.319 21:32:22 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:28.319 21:32:22 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:28.319 21:32:22 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:28.319 21:32:22 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:28.319 21:32:22 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:28.319 21:32:22 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:28.319 21:32:22 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:28.319 21:32:22 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:31:28.319 21:32:22 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:31:28.319 21:32:22 -- host/digest.sh@16 -- # runtime=2 00:31:28.319 21:32:22 -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:31:28.319 21:32:22 -- host/digest.sh@138 -- # nvmftestinit 00:31:28.319 21:32:22 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:31:28.319 21:32:22 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:28.319 21:32:22 -- nvmf/common.sh@437 -- # prepare_net_devs 00:31:28.319 21:32:22 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:31:28.319 21:32:22 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:31:28.319 21:32:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:28.319 21:32:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:28.319 21:32:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:28.319 21:32:22 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:31:28.319 21:32:22 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:31:28.319 21:32:22 -- nvmf/common.sh@285 -- # xtrace_disable 00:31:28.319 21:32:22 -- common/autotest_common.sh@10 -- # set +x 00:31:33.594 21:32:27 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:31:33.594 21:32:27 -- nvmf/common.sh@291 -- # pci_devs=() 00:31:33.594 21:32:27 -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:33.594 21:32:27 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:33.594 21:32:27 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:33.594 21:32:27 -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:33.594 21:32:27 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:33.594 21:32:27 -- 
nvmf/common.sh@295 -- # net_devs=() 00:31:33.594 21:32:27 -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:33.594 21:32:27 -- nvmf/common.sh@296 -- # e810=() 00:31:33.594 21:32:27 -- nvmf/common.sh@296 -- # local -ga e810 00:31:33.594 21:32:27 -- nvmf/common.sh@297 -- # x722=() 00:31:33.594 21:32:27 -- nvmf/common.sh@297 -- # local -ga x722 00:31:33.594 21:32:27 -- nvmf/common.sh@298 -- # mlx=() 00:31:33.594 21:32:27 -- nvmf/common.sh@298 -- # local -ga mlx 00:31:33.594 21:32:27 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:33.594 21:32:27 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:33.594 21:32:27 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:33.594 21:32:27 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:33.594 21:32:27 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:33.594 21:32:27 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:33.594 21:32:27 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:33.594 21:32:27 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:33.594 21:32:27 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:33.594 21:32:27 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:33.594 21:32:27 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:33.594 21:32:27 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:33.594 21:32:27 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:33.594 21:32:27 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:31:33.594 21:32:27 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:31:33.594 21:32:27 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:31:33.594 21:32:27 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:33.594 21:32:27 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:33.594 21:32:27 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:31:33.594 Found 0000:27:00.0 (0x8086 - 0x159b) 00:31:33.594 21:32:27 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:33.594 21:32:27 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:33.594 21:32:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:33.594 21:32:27 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:33.594 21:32:27 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:33.594 21:32:27 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:33.594 21:32:27 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:31:33.594 Found 0000:27:00.1 (0x8086 - 0x159b) 00:31:33.594 21:32:27 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:33.594 21:32:27 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:33.594 21:32:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:33.594 21:32:27 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:33.594 21:32:27 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:33.594 21:32:27 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:33.594 21:32:27 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:31:33.594 21:32:27 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:33.594 21:32:27 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:33.594 21:32:27 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:31:33.594 21:32:27 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:33.594 21:32:27 -- nvmf/common.sh@389 -- # 
echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:31:33.594 Found net devices under 0000:27:00.0: cvl_0_0 00:31:33.594 21:32:27 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:31:33.594 21:32:27 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:33.594 21:32:27 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:33.594 21:32:27 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:31:33.594 21:32:27 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:33.594 21:32:27 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:31:33.594 Found net devices under 0000:27:00.1: cvl_0_1 00:31:33.594 21:32:27 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:31:33.594 21:32:27 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:31:33.594 21:32:27 -- nvmf/common.sh@403 -- # is_hw=yes 00:31:33.594 21:32:27 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:31:33.594 21:32:27 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:31:33.594 21:32:27 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:31:33.594 21:32:27 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:33.594 21:32:27 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:33.594 21:32:27 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:33.594 21:32:27 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:33.594 21:32:27 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:33.594 21:32:27 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:33.594 21:32:27 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:33.594 21:32:27 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:33.594 21:32:27 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:33.594 21:32:27 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:33.594 21:32:27 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:33.594 21:32:27 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:33.594 21:32:27 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:33.594 21:32:27 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:33.594 21:32:27 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:33.594 21:32:27 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:33.594 21:32:27 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:33.594 21:32:27 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:33.594 21:32:27 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:33.594 21:32:27 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:33.594 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:33.594 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.605 ms 00:31:33.594 00:31:33.594 --- 10.0.0.2 ping statistics --- 00:31:33.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:33.595 rtt min/avg/max/mdev = 0.605/0.605/0.605/0.000 ms 00:31:33.595 21:32:27 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:33.595 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:33.595 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.471 ms 00:31:33.595 00:31:33.595 --- 10.0.0.1 ping statistics --- 00:31:33.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:33.595 rtt min/avg/max/mdev = 0.471/0.471/0.471/0.000 ms 00:31:33.595 21:32:27 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:33.595 21:32:27 -- nvmf/common.sh@411 -- # return 0 00:31:33.595 21:32:27 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:31:33.595 21:32:27 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:33.595 21:32:27 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:31:33.595 21:32:27 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:31:33.595 21:32:27 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:33.595 21:32:27 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:31:33.595 21:32:27 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:31:33.595 21:32:27 -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:31:33.595 21:32:27 -- host/digest.sh@141 -- # [[ 1 -eq 1 ]] 00:31:33.595 21:32:27 -- host/digest.sh@142 -- # run_test nvmf_digest_dsa_initiator run_digest dsa_initiator 00:31:33.595 21:32:27 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:31:33.595 21:32:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:33.595 21:32:27 -- common/autotest_common.sh@10 -- # set +x 00:31:33.856 ************************************ 00:31:33.856 START TEST nvmf_digest_dsa_initiator 00:31:33.856 ************************************ 00:31:33.856 21:32:27 -- common/autotest_common.sh@1111 -- # run_digest dsa_initiator 00:31:33.856 21:32:27 -- host/digest.sh@120 -- # local dsa_initiator 00:31:33.856 21:32:27 -- host/digest.sh@121 -- # [[ dsa_initiator == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:31:33.856 21:32:27 -- host/digest.sh@121 -- # dsa_initiator=true 00:31:33.856 21:32:27 -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:31:33.856 21:32:27 -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:31:33.856 21:32:27 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:31:33.856 21:32:27 -- common/autotest_common.sh@710 -- # xtrace_disable 00:31:33.856 21:32:27 -- common/autotest_common.sh@10 -- # set +x 00:31:33.856 21:32:27 -- nvmf/common.sh@470 -- # nvmfpid=1651651 00:31:33.856 21:32:27 -- nvmf/common.sh@471 -- # waitforlisten 1651651 00:31:33.856 21:32:27 -- common/autotest_common.sh@817 -- # '[' -z 1651651 ']' 00:31:33.856 21:32:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:33.856 21:32:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:33.856 21:32:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:33.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:33.856 21:32:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:33.856 21:32:27 -- common/autotest_common.sh@10 -- # set +x 00:31:33.856 21:32:27 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:31:33.856 [2024-04-23 21:32:28.007077] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 
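The nvmf_tcp_init sequence recorded above reduces to a short piece of iproute2 plumbing: the target-side interface (cvl_0_0) is moved into its own network namespace, both ends get addresses on 10.0.0.0/24, port 4420 is opened, and reachability is verified with one ping in each direction before nvmf_tgt is launched inside that namespace with --wait-for-rpc. Condensed from the commands visible in this log, with the interface names and addresses as reported here:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target NIC into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
  ping -c 1 10.0.0.2                                             # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator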
00:31:33.856 [2024-04-23 21:32:28.007190] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:33.856 EAL: No free 2048 kB hugepages reported on node 1 00:31:34.116 [2024-04-23 21:32:28.141514] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:34.116 [2024-04-23 21:32:28.243455] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:34.116 [2024-04-23 21:32:28.243492] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:34.116 [2024-04-23 21:32:28.243502] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:34.116 [2024-04-23 21:32:28.243512] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:34.116 [2024-04-23 21:32:28.243520] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:34.116 [2024-04-23 21:32:28.243555] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:34.684 21:32:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:34.684 21:32:28 -- common/autotest_common.sh@850 -- # return 0 00:31:34.684 21:32:28 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:31:34.684 21:32:28 -- common/autotest_common.sh@716 -- # xtrace_disable 00:31:34.684 21:32:28 -- common/autotest_common.sh@10 -- # set +x 00:31:34.684 21:32:28 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:34.684 21:32:28 -- host/digest.sh@125 -- # [[ dsa_initiator == \d\s\a\_\t\a\r\g\e\t ]] 00:31:34.684 21:32:28 -- host/digest.sh@126 -- # common_target_config 00:31:34.684 21:32:28 -- host/digest.sh@43 -- # rpc_cmd 00:31:34.684 21:32:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:34.684 21:32:28 -- common/autotest_common.sh@10 -- # set +x 00:31:34.684 null0 00:31:34.684 [2024-04-23 21:32:28.892089] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:34.684 [2024-04-23 21:32:28.916234] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:34.684 21:32:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:34.684 21:32:28 -- host/digest.sh@128 -- # run_bperf randread 4096 128 true 00:31:34.684 21:32:28 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:31:34.684 21:32:28 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:31:34.684 21:32:28 -- host/digest.sh@80 -- # rw=randread 00:31:34.684 21:32:28 -- host/digest.sh@80 -- # bs=4096 00:31:34.684 21:32:28 -- host/digest.sh@80 -- # qd=128 00:31:34.684 21:32:28 -- host/digest.sh@80 -- # scan_dsa=true 00:31:34.684 21:32:28 -- host/digest.sh@83 -- # bperfpid=1651959 00:31:34.684 21:32:28 -- host/digest.sh@84 -- # waitforlisten 1651959 /var/tmp/bperf.sock 00:31:34.684 21:32:28 -- common/autotest_common.sh@817 -- # '[' -z 1651959 ']' 00:31:34.684 21:32:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:34.684 21:32:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:34.684 21:32:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:34.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
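Each run_bperf pass that follows uses the same recipe: bdevperf is started with --wait-for-rpc so nothing initializes until instructed, the DSA accel module is enabled, framework init completes, an NVMe/TCP controller is attached with the data digest turned on, and the workload is kicked off. Condensed from the RPC calls in this log, with $SPDK_DIR standing in for the long workspace path:

  BPERF=/var/tmp/bperf.sock
  RPC="$SPDK_DIR/scripts/rpc.py -s $BPERF"
  $RPC dsa_scan_accel_module                     # register user-mode DSA before framework init
  $RPC framework_start_init
  $RPC bdev_nvme_attach_controller --ddgst \
       -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
       -n nqn.2016-06.io.spdk:cnode1 -b nvme0    # --ddgst enables the NVMe/TCP data digest (CRC32C)
  "$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s $BPERF perform_tests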
00:31:34.684 21:32:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:34.684 21:32:28 -- common/autotest_common.sh@10 -- # set +x 00:31:34.684 21:32:28 -- host/digest.sh@82 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:31:34.943 [2024-04-23 21:32:28.989178] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 00:31:34.943 [2024-04-23 21:32:28.989285] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1651959 ] 00:31:34.943 EAL: No free 2048 kB hugepages reported on node 1 00:31:34.943 [2024-04-23 21:32:29.101731] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:34.943 [2024-04-23 21:32:29.190224] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:35.537 21:32:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:35.537 21:32:29 -- common/autotest_common.sh@850 -- # return 0 00:31:35.537 21:32:29 -- host/digest.sh@86 -- # true 00:31:35.537 21:32:29 -- host/digest.sh@86 -- # bperf_rpc dsa_scan_accel_module 00:31:35.537 21:32:29 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock dsa_scan_accel_module 00:31:35.798 [2024-04-23 21:32:29.814721] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:31:35.798 21:32:29 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:31:35.798 21:32:29 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:41.069 21:32:35 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:41.069 21:32:35 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:41.327 nvme0n1 00:31:41.327 21:32:35 -- host/digest.sh@92 -- # bperf_py perform_tests 00:31:41.327 21:32:35 -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:41.327 Running I/O for 2 seconds... 
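While the two-second run executes, the pass criterion worth knowing is what happens afterwards: the script pulls accel statistics from the bperf instance and requires that crc32c operations were actually executed, and by the expected module (dsa here, rather than the software fallback). The check at host/digest.sh@36-37 amounts to the following, reusing $RPC from the sketch above:

  $RPC accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  # expected output here: "dsa <nonzero count>"; the script then asserts
  # (( executed > 0 )) and that module_name matches dsa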
00:31:43.321 00:31:43.321 Latency(us) 00:31:43.321 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:43.321 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:31:43.321 nvme0n1 : 2.00 23827.45 93.08 0.00 0.00 5365.17 2793.90 12003.44 00:31:43.321 =================================================================================================================== 00:31:43.321 Total : 23827.45 93.08 0.00 0.00 5365.17 2793.90 12003.44 00:31:43.321 0 00:31:43.321 21:32:37 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:31:43.321 21:32:37 -- host/digest.sh@93 -- # get_accel_stats 00:31:43.321 21:32:37 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:31:43.321 21:32:37 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:31:43.321 | select(.opcode=="crc32c") 00:31:43.321 | "\(.module_name) \(.executed)"' 00:31:43.321 21:32:37 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:31:43.579 21:32:37 -- host/digest.sh@94 -- # true 00:31:43.579 21:32:37 -- host/digest.sh@94 -- # exp_module=dsa 00:31:43.579 21:32:37 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:31:43.579 21:32:37 -- host/digest.sh@96 -- # [[ dsa == \d\s\a ]] 00:31:43.579 21:32:37 -- host/digest.sh@98 -- # killprocess 1651959 00:31:43.579 21:32:37 -- common/autotest_common.sh@936 -- # '[' -z 1651959 ']' 00:31:43.579 21:32:37 -- common/autotest_common.sh@940 -- # kill -0 1651959 00:31:43.579 21:32:37 -- common/autotest_common.sh@941 -- # uname 00:31:43.579 21:32:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:31:43.579 21:32:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1651959 00:31:43.579 21:32:37 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:31:43.579 21:32:37 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:31:43.579 21:32:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1651959' 00:31:43.579 killing process with pid 1651959 00:31:43.579 21:32:37 -- common/autotest_common.sh@955 -- # kill 1651959 00:31:43.579 Received shutdown signal, test time was about 2.000000 seconds 00:31:43.579 00:31:43.580 Latency(us) 00:31:43.580 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:43.580 =================================================================================================================== 00:31:43.580 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:43.580 21:32:37 -- common/autotest_common.sh@960 -- # wait 1651959 00:31:44.964 21:32:39 -- host/digest.sh@129 -- # run_bperf randread 131072 16 true 00:31:44.964 21:32:39 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:31:44.964 21:32:39 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:31:44.964 21:32:39 -- host/digest.sh@80 -- # rw=randread 00:31:44.964 21:32:39 -- host/digest.sh@80 -- # bs=131072 00:31:44.964 21:32:39 -- host/digest.sh@80 -- # qd=16 00:31:44.964 21:32:39 -- host/digest.sh@80 -- # scan_dsa=true 00:31:44.964 21:32:39 -- host/digest.sh@83 -- # bperfpid=1653873 00:31:44.964 21:32:39 -- host/digest.sh@84 -- # waitforlisten 1653873 /var/tmp/bperf.sock 00:31:44.964 21:32:39 -- common/autotest_common.sh@817 -- # '[' -z 1653873 ']' 00:31:44.964 21:32:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:44.964 21:32:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:44.964 21:32:39 -- common/autotest_common.sh@824 -- # echo 'Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:44.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:44.964 21:32:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:44.964 21:32:39 -- common/autotest_common.sh@10 -- # set +x 00:31:44.964 21:32:39 -- host/digest.sh@82 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:31:44.964 [2024-04-23 21:32:39.194188] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 00:31:44.964 [2024-04-23 21:32:39.194297] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1653873 ] 00:31:44.964 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:44.964 Zero copy mechanism will not be used. 00:31:45.223 EAL: No free 2048 kB hugepages reported on node 1 00:31:45.223 [2024-04-23 21:32:39.313902] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:45.223 [2024-04-23 21:32:39.405465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:45.790 21:32:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:45.790 21:32:39 -- common/autotest_common.sh@850 -- # return 0 00:31:45.790 21:32:39 -- host/digest.sh@86 -- # true 00:31:45.790 21:32:39 -- host/digest.sh@86 -- # bperf_rpc dsa_scan_accel_module 00:31:45.790 21:32:39 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock dsa_scan_accel_module 00:31:45.790 [2024-04-23 21:32:40.006010] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:31:45.790 21:32:40 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:31:45.790 21:32:40 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:51.068 21:32:45 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:51.068 21:32:45 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:51.327 nvme0n1 00:31:51.327 21:32:45 -- host/digest.sh@92 -- # bperf_py perform_tests 00:31:51.327 21:32:45 -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:51.327 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:51.327 Zero copy mechanism will not be used. 00:31:51.327 Running I/O for 2 seconds... 
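Two details in this stretch of output are easy to misread. First, the notice "I/O size of 131072 is greater than zero copy threshold (65536)" is informational: bdevperf only takes its zero-copy path for I/O up to 64 KiB, so these 128 KiB reads run buffered. Second, the IOPS and MiB/s columns are self-consistent; for the 4 KiB randread table above:

  23827.45 IOPS x 4096 B = 97,597,235 B/s, and 97,597,235 / 2^20 ≈ 93.08 MiB/s

which is exactly the MiB/s figure reported.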
00:31:53.869 00:31:53.869 Latency(us) 00:31:53.869 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:53.869 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:31:53.869 nvme0n1 : 2.00 4105.10 513.14 0.00 0.00 3895.22 3069.84 8795.62 00:31:53.869 =================================================================================================================== 00:31:53.869 Total : 4105.10 513.14 0.00 0.00 3895.22 3069.84 8795.62 00:31:53.869 0 00:31:53.869 21:32:47 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:31:53.869 21:32:47 -- host/digest.sh@93 -- # get_accel_stats 00:31:53.869 21:32:47 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:31:53.869 | select(.opcode=="crc32c") 00:31:53.869 | "\(.module_name) \(.executed)"' 00:31:53.869 21:32:47 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:31:53.869 21:32:47 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:31:53.869 21:32:47 -- host/digest.sh@94 -- # true 00:31:53.869 21:32:47 -- host/digest.sh@94 -- # exp_module=dsa 00:31:53.869 21:32:47 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:31:53.869 21:32:47 -- host/digest.sh@96 -- # [[ dsa == \d\s\a ]] 00:31:53.869 21:32:47 -- host/digest.sh@98 -- # killprocess 1653873 00:31:53.869 21:32:47 -- common/autotest_common.sh@936 -- # '[' -z 1653873 ']' 00:31:53.869 21:32:47 -- common/autotest_common.sh@940 -- # kill -0 1653873 00:31:53.869 21:32:47 -- common/autotest_common.sh@941 -- # uname 00:31:53.869 21:32:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:31:53.869 21:32:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1653873 00:31:53.869 21:32:47 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:31:53.869 21:32:47 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:31:53.869 21:32:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1653873' 00:31:53.869 killing process with pid 1653873 00:31:53.869 21:32:47 -- common/autotest_common.sh@955 -- # kill 1653873 00:31:53.869 Received shutdown signal, test time was about 2.000000 seconds 00:31:53.869 00:31:53.869 Latency(us) 00:31:53.869 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:53.869 =================================================================================================================== 00:31:53.869 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:53.869 21:32:47 -- common/autotest_common.sh@960 -- # wait 1653873 00:31:55.246 21:32:49 -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 true 00:31:55.246 21:32:49 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:31:55.246 21:32:49 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:31:55.246 21:32:49 -- host/digest.sh@80 -- # rw=randwrite 00:31:55.246 21:32:49 -- host/digest.sh@80 -- # bs=4096 00:31:55.246 21:32:49 -- host/digest.sh@80 -- # qd=128 00:31:55.246 21:32:49 -- host/digest.sh@80 -- # scan_dsa=true 00:31:55.246 21:32:49 -- host/digest.sh@83 -- # bperfpid=1656228 00:31:55.246 21:32:49 -- host/digest.sh@84 -- # waitforlisten 1656228 /var/tmp/bperf.sock 00:31:55.246 21:32:49 -- common/autotest_common.sh@817 -- # '[' -z 1656228 ']' 00:31:55.246 21:32:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:55.246 21:32:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:55.246 21:32:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:55.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:55.246 21:32:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:55.246 21:32:49 -- common/autotest_common.sh@10 -- # set +x 00:31:55.246 21:32:49 -- host/digest.sh@82 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:31:55.246 [2024-04-23 21:32:49.262531] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 00:31:55.246 [2024-04-23 21:32:49.262656] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1656228 ] 00:31:55.246 EAL: No free 2048 kB hugepages reported on node 1 00:31:55.246 [2024-04-23 21:32:49.365116] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:55.246 [2024-04-23 21:32:49.453602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:55.817 21:32:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:55.817 21:32:49 -- common/autotest_common.sh@850 -- # return 0 00:31:55.817 21:32:49 -- host/digest.sh@86 -- # true 00:31:55.817 21:32:49 -- host/digest.sh@86 -- # bperf_rpc dsa_scan_accel_module 00:31:55.817 21:32:49 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock dsa_scan_accel_module 00:31:56.078 [2024-04-23 21:32:50.118146] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:31:56.078 21:32:50 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:31:56.078 21:32:50 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:01.369 21:32:55 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:01.369 21:32:55 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:01.369 nvme0n1 00:32:01.369 21:32:55 -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:01.369 21:32:55 -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:01.369 Running I/O for 2 seconds... 
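The run that just started (randwrite, 4 KiB blocks, queue depth 128, DSA enabled) follows the same RPC-driven sequence every run_bperf pass in this trace uses: start bdevperf paused, configure it over its private UNIX socket, then kick off the timed workload. A minimal sketch of that sequence, assembled only from commands visible in the trace above (paths, addresses, and the NQN are copied verbatim; error handling and the waitforlisten polling are omitted):

    # start bdevperf paused (--wait-for-rpc) so accel modules can be configured first
    /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc &

    rpc="/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
    $rpc dsa_scan_accel_module      # scan_dsa=true case: opt the initiator into DSA crc32c offload
    $rpc framework_start_init       # leave the --wait-for-rpc holding state
    $rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0   # --ddgst enables the NVMe/TCP data digest
    /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests             # the 2-second run logged below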
00:32:03.904 00:32:03.904 Latency(us) 00:32:03.904 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:03.904 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:03.904 nvme0n1 : 2.00 26285.02 102.68 0.00 0.00 4861.96 4346.07 13728.07 00:32:03.904 =================================================================================================================== 00:32:03.904 Total : 26285.02 102.68 0.00 0.00 4861.96 4346.07 13728.07 00:32:03.904 0 00:32:03.904 21:32:57 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:03.904 21:32:57 -- host/digest.sh@93 -- # get_accel_stats 00:32:03.904 21:32:57 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:03.904 21:32:57 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:03.904 21:32:57 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:03.904 | select(.opcode=="crc32c") 00:32:03.904 | "\(.module_name) \(.executed)"' 00:32:03.904 21:32:57 -- host/digest.sh@94 -- # true 00:32:03.904 21:32:57 -- host/digest.sh@94 -- # exp_module=dsa 00:32:03.904 21:32:57 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:03.904 21:32:57 -- host/digest.sh@96 -- # [[ dsa == \d\s\a ]] 00:32:03.904 21:32:57 -- host/digest.sh@98 -- # killprocess 1656228 00:32:03.904 21:32:57 -- common/autotest_common.sh@936 -- # '[' -z 1656228 ']' 00:32:03.904 21:32:57 -- common/autotest_common.sh@940 -- # kill -0 1656228 00:32:03.904 21:32:57 -- common/autotest_common.sh@941 -- # uname 00:32:03.904 21:32:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:32:03.904 21:32:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1656228 00:32:03.904 21:32:57 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:32:03.904 21:32:57 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:32:03.904 21:32:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1656228' 00:32:03.904 killing process with pid 1656228 00:32:03.904 21:32:57 -- common/autotest_common.sh@955 -- # kill 1656228 00:32:03.904 Received shutdown signal, test time was about 2.000000 seconds 00:32:03.904 00:32:03.904 Latency(us) 00:32:03.904 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:03.904 =================================================================================================================== 00:32:03.904 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:03.904 21:32:57 -- common/autotest_common.sh@960 -- # wait 1656228 00:32:05.287 21:32:59 -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 true 00:32:05.287 21:32:59 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:05.287 21:32:59 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:05.287 21:32:59 -- host/digest.sh@80 -- # rw=randwrite 00:32:05.287 21:32:59 -- host/digest.sh@80 -- # bs=131072 00:32:05.287 21:32:59 -- host/digest.sh@80 -- # qd=16 00:32:05.287 21:32:59 -- host/digest.sh@80 -- # scan_dsa=true 00:32:05.287 21:32:59 -- host/digest.sh@83 -- # bperfpid=1658237 00:32:05.287 21:32:59 -- host/digest.sh@84 -- # waitforlisten 1658237 /var/tmp/bperf.sock 00:32:05.287 21:32:59 -- common/autotest_common.sh@817 -- # '[' -z 1658237 ']' 00:32:05.287 21:32:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:05.287 21:32:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:32:05.287 21:32:59 -- common/autotest_common.sh@824 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:05.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:05.287 21:32:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:32:05.287 21:32:59 -- common/autotest_common.sh@10 -- # set +x 00:32:05.287 21:32:59 -- host/digest.sh@82 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:32:05.287 [2024-04-23 21:32:59.324158] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 00:32:05.287 [2024-04-23 21:32:59.324275] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1658237 ] 00:32:05.287 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:05.287 Zero copy mechanism will not be used. 00:32:05.287 EAL: No free 2048 kB hugepages reported on node 1 00:32:05.287 [2024-04-23 21:32:59.448625] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:05.287 [2024-04-23 21:32:59.544224] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:05.854 21:33:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:32:05.854 21:33:00 -- common/autotest_common.sh@850 -- # return 0 00:32:05.854 21:33:00 -- host/digest.sh@86 -- # true 00:32:05.854 21:33:00 -- host/digest.sh@86 -- # bperf_rpc dsa_scan_accel_module 00:32:05.854 21:33:00 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock dsa_scan_accel_module 00:32:06.112 [2024-04-23 21:33:00.184795] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:32:06.112 21:33:00 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:06.112 21:33:00 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:11.391 21:33:05 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:11.392 21:33:05 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:11.650 nvme0n1 00:32:11.650 21:33:05 -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:11.650 21:33:05 -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:11.650 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:11.650 Zero copy mechanism will not be used. 00:32:11.650 Running I/O for 2 seconds... 
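The pass criterion for each of these runs is not the IOPS figure but the accel-stats check that follows it: host/digest.sh reads back which accel module actually executed the crc32c operations and compares it against the expected one (dsa in this half of the suite, since the module was scanned in). A sketch of that check, reusing the exact jq filter from the trace (the rpc shorthand is local to this sketch):

    rpc="/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
    read -r acc_module acc_executed < <($rpc accel_get_stats |
        jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
    # digest.sh@95/@96: some crc32c work must have run, and on the expected module
    (( acc_executed > 0 )) && [[ $acc_module == dsa ]] && echo "crc32c offloaded to DSA"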
00:32:14.188 00:32:14.188 Latency(us) 00:32:14.188 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:14.188 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:32:14.188 nvme0n1 : 2.01 2612.96 326.62 0.00 0.00 6112.90 3225.06 10209.82 00:32:14.188 =================================================================================================================== 00:32:14.188 Total : 2612.96 326.62 0.00 0.00 6112.90 3225.06 10209.82 00:32:14.188 0 00:32:14.188 21:33:07 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:14.188 21:33:07 -- host/digest.sh@93 -- # get_accel_stats 00:32:14.188 21:33:07 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:14.188 21:33:07 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:14.188 | select(.opcode=="crc32c") 00:32:14.188 | "\(.module_name) \(.executed)"' 00:32:14.189 21:33:07 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:14.189 21:33:07 -- host/digest.sh@94 -- # true 00:32:14.189 21:33:07 -- host/digest.sh@94 -- # exp_module=dsa 00:32:14.189 21:33:07 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:14.189 21:33:07 -- host/digest.sh@96 -- # [[ dsa == \d\s\a ]] 00:32:14.189 21:33:07 -- host/digest.sh@98 -- # killprocess 1658237 00:32:14.189 21:33:07 -- common/autotest_common.sh@936 -- # '[' -z 1658237 ']' 00:32:14.189 21:33:07 -- common/autotest_common.sh@940 -- # kill -0 1658237 00:32:14.189 21:33:07 -- common/autotest_common.sh@941 -- # uname 00:32:14.189 21:33:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:32:14.189 21:33:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1658237 00:32:14.189 21:33:08 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:32:14.189 21:33:08 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:32:14.189 21:33:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1658237' 00:32:14.189 killing process with pid 1658237 00:32:14.189 21:33:08 -- common/autotest_common.sh@955 -- # kill 1658237 00:32:14.189 Received shutdown signal, test time was about 2.000000 seconds 00:32:14.189 00:32:14.189 Latency(us) 00:32:14.189 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:14.189 =================================================================================================================== 00:32:14.189 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:14.189 21:33:08 -- common/autotest_common.sh@960 -- # wait 1658237 00:32:15.565 21:33:09 -- host/digest.sh@132 -- # killprocess 1651651 00:32:15.565 21:33:09 -- common/autotest_common.sh@936 -- # '[' -z 1651651 ']' 00:32:15.566 21:33:09 -- common/autotest_common.sh@940 -- # kill -0 1651651 00:32:15.566 21:33:09 -- common/autotest_common.sh@941 -- # uname 00:32:15.566 21:33:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:32:15.566 21:33:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1651651 00:32:15.566 21:33:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:32:15.566 21:33:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:32:15.566 21:33:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1651651' 00:32:15.566 killing process with pid 1651651 00:32:15.566 21:33:09 -- common/autotest_common.sh@955 -- # kill 1651651 00:32:15.566 21:33:09 -- common/autotest_common.sh@960 -- # wait 1651651 00:32:15.827 00:32:15.827 real 
0m42.069s 00:32:15.827 user 1m2.558s 00:32:15.827 sys 0m3.623s 00:32:15.827 21:33:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:15.827 21:33:09 -- common/autotest_common.sh@10 -- # set +x 00:32:15.827 ************************************ 00:32:15.827 END TEST nvmf_digest_dsa_initiator 00:32:15.827 ************************************ 00:32:15.827 21:33:10 -- host/digest.sh@143 -- # run_test nvmf_digest_dsa_target run_digest dsa_target 00:32:15.827 21:33:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:32:15.827 21:33:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:15.827 21:33:10 -- common/autotest_common.sh@10 -- # set +x 00:32:16.089 ************************************ 00:32:16.089 START TEST nvmf_digest_dsa_target 00:32:16.089 ************************************ 00:32:16.089 21:33:10 -- common/autotest_common.sh@1111 -- # run_digest dsa_target 00:32:16.089 21:33:10 -- host/digest.sh@120 -- # local dsa_initiator 00:32:16.089 21:33:10 -- host/digest.sh@121 -- # [[ dsa_target == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:32:16.089 21:33:10 -- host/digest.sh@121 -- # dsa_initiator=false 00:32:16.089 21:33:10 -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:32:16.089 21:33:10 -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:32:16.089 21:33:10 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:32:16.089 21:33:10 -- common/autotest_common.sh@710 -- # xtrace_disable 00:32:16.089 21:33:10 -- common/autotest_common.sh@10 -- # set +x 00:32:16.089 21:33:10 -- nvmf/common.sh@470 -- # nvmfpid=1661800 00:32:16.089 21:33:10 -- nvmf/common.sh@471 -- # waitforlisten 1661800 00:32:16.089 21:33:10 -- common/autotest_common.sh@817 -- # '[' -z 1661800 ']' 00:32:16.089 21:33:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:16.089 21:33:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:32:16.089 21:33:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:16.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:16.089 21:33:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:32:16.089 21:33:10 -- common/autotest_common.sh@10 -- # set +x 00:32:16.089 21:33:10 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:32:16.089 [2024-04-23 21:33:10.225520] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 00:32:16.089 [2024-04-23 21:33:10.225654] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:16.089 EAL: No free 2048 kB hugepages reported on node 1 00:32:16.351 [2024-04-23 21:33:10.367079] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:16.351 [2024-04-23 21:33:10.460501] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:16.351 [2024-04-23 21:33:10.460549] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:16.351 [2024-04-23 21:33:10.460559] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:16.351 [2024-04-23 21:33:10.460570] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:32:16.351 [2024-04-23 21:33:10.460577] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:16.351 [2024-04-23 21:33:10.460611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:16.919 21:33:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:32:16.919 21:33:10 -- common/autotest_common.sh@850 -- # return 0 00:32:16.919 21:33:10 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:32:16.920 21:33:10 -- common/autotest_common.sh@716 -- # xtrace_disable 00:32:16.920 21:33:10 -- common/autotest_common.sh@10 -- # set +x 00:32:16.920 21:33:10 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:16.920 21:33:10 -- host/digest.sh@125 -- # [[ dsa_target == \d\s\a\_\t\a\r\g\e\t ]] 00:32:16.920 21:33:10 -- host/digest.sh@125 -- # rpc_cmd dsa_scan_accel_module 00:32:16.920 21:33:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:16.920 21:33:10 -- common/autotest_common.sh@10 -- # set +x 00:32:16.920 [2024-04-23 21:33:10.945098] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:32:16.920 21:33:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:16.920 21:33:10 -- host/digest.sh@126 -- # common_target_config 00:32:16.920 21:33:10 -- host/digest.sh@43 -- # rpc_cmd 00:32:16.920 21:33:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:16.920 21:33:10 -- common/autotest_common.sh@10 -- # set +x 00:32:22.199 null0 00:32:22.199 [2024-04-23 21:33:16.050262] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:22.199 [2024-04-23 21:33:16.077198] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:22.199 21:33:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:22.199 21:33:16 -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:32:22.199 21:33:16 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:22.199 21:33:16 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:22.199 21:33:16 -- host/digest.sh@80 -- # rw=randread 00:32:22.199 21:33:16 -- host/digest.sh@80 -- # bs=4096 00:32:22.199 21:33:16 -- host/digest.sh@80 -- # qd=128 00:32:22.199 21:33:16 -- host/digest.sh@80 -- # scan_dsa=false 00:32:22.199 21:33:16 -- host/digest.sh@83 -- # bperfpid=1663319 00:32:22.199 21:33:16 -- host/digest.sh@84 -- # waitforlisten 1663319 /var/tmp/bperf.sock 00:32:22.199 21:33:16 -- common/autotest_common.sh@817 -- # '[' -z 1663319 ']' 00:32:22.199 21:33:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:22.199 21:33:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:32:22.199 21:33:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:22.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:22.199 21:33:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:32:22.199 21:33:16 -- host/digest.sh@82 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:32:22.199 21:33:16 -- common/autotest_common.sh@10 -- # set +x 00:32:22.199 [2024-04-23 21:33:16.168035] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 
00:32:22.199 [2024-04-23 21:33:16.168187] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1663319 ] 00:32:22.199 EAL: No free 2048 kB hugepages reported on node 1 00:32:22.199 [2024-04-23 21:33:16.302948] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:22.199 [2024-04-23 21:33:16.393992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:22.765 21:33:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:32:22.765 21:33:16 -- common/autotest_common.sh@850 -- # return 0 00:32:22.765 21:33:16 -- host/digest.sh@86 -- # false 00:32:22.765 21:33:16 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:22.765 21:33:16 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:23.025 21:33:17 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:23.025 21:33:17 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:23.286 nvme0n1 00:32:23.286 21:33:17 -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:23.286 21:33:17 -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:23.286 Running I/O for 2 seconds... 00:32:25.293 00:32:25.293 Latency(us) 00:32:25.293 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:25.293 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:32:25.293 nvme0n1 : 2.00 22984.80 89.78 0.00 0.00 5564.22 2035.07 16142.55 00:32:25.293 =================================================================================================================== 00:32:25.293 Total : 22984.80 89.78 0.00 0.00 5564.22 2035.07 16142.55 00:32:25.293 0 00:32:25.293 21:33:19 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:25.293 21:33:19 -- host/digest.sh@93 -- # get_accel_stats 00:32:25.293 21:33:19 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:25.293 21:33:19 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:25.293 | select(.opcode=="crc32c") 00:32:25.293 | "\(.module_name) \(.executed)"' 00:32:25.293 21:33:19 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:25.552 21:33:19 -- host/digest.sh@94 -- # false 00:32:25.552 21:33:19 -- host/digest.sh@94 -- # exp_module=software 00:32:25.552 21:33:19 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:25.552 21:33:19 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:25.552 21:33:19 -- host/digest.sh@98 -- # killprocess 1663319 00:32:25.552 21:33:19 -- common/autotest_common.sh@936 -- # '[' -z 1663319 ']' 00:32:25.552 21:33:19 -- common/autotest_common.sh@940 -- # kill -0 1663319 00:32:25.552 21:33:19 -- common/autotest_common.sh@941 -- # uname 00:32:25.552 21:33:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:32:25.552 21:33:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1663319 00:32:25.552 21:33:19 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:32:25.552 21:33:19 
-- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:32:25.552 21:33:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1663319' 00:32:25.552 killing process with pid 1663319 00:32:25.552 21:33:19 -- common/autotest_common.sh@955 -- # kill 1663319 00:32:25.552 Received shutdown signal, test time was about 2.000000 seconds 00:32:25.552 00:32:25.552 Latency(us) 00:32:25.552 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:25.552 =================================================================================================================== 00:32:25.552 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:25.552 21:33:19 -- common/autotest_common.sh@960 -- # wait 1663319 00:32:26.121 21:33:20 -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:32:26.121 21:33:20 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:26.121 21:33:20 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:26.121 21:33:20 -- host/digest.sh@80 -- # rw=randread 00:32:26.121 21:33:20 -- host/digest.sh@80 -- # bs=131072 00:32:26.121 21:33:20 -- host/digest.sh@80 -- # qd=16 00:32:26.121 21:33:20 -- host/digest.sh@80 -- # scan_dsa=false 00:32:26.121 21:33:20 -- host/digest.sh@83 -- # bperfpid=1664481 00:32:26.121 21:33:20 -- host/digest.sh@84 -- # waitforlisten 1664481 /var/tmp/bperf.sock 00:32:26.121 21:33:20 -- common/autotest_common.sh@817 -- # '[' -z 1664481 ']' 00:32:26.121 21:33:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:26.121 21:33:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:32:26.121 21:33:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:26.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:26.121 21:33:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:32:26.121 21:33:20 -- common/autotest_common.sh@10 -- # set +x 00:32:26.121 21:33:20 -- host/digest.sh@82 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:32:26.121 [2024-04-23 21:33:20.165140] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 00:32:26.121 [2024-04-23 21:33:20.165252] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1664481 ] 00:32:26.121 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:26.121 Zero copy mechanism will not be used. 
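The "Zero copy mechanism will not be used" notice is expected for the 131072-byte passes: the initiator only attempts zero-copy sends for I/O at or below the reported threshold (65536 bytes), so the 128 KiB runs fall back to the ordinary copy path, and their throughput should be read with that in mind. A hypothetical one-liner illustrating the decision the notice reflects (variable names invented for this sketch):

    io_size=131072 zcopy_threshold=65536
    (( io_size > zcopy_threshold )) && echo "Zero copy mechanism will not be used."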
00:32:26.121 EAL: No free 2048 kB hugepages reported on node 1 00:32:26.121 [2024-04-23 21:33:20.280489] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:26.121 [2024-04-23 21:33:20.370654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:26.688 21:33:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:32:26.688 21:33:20 -- common/autotest_common.sh@850 -- # return 0 00:32:26.688 21:33:20 -- host/digest.sh@86 -- # false 00:32:26.688 21:33:20 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:26.688 21:33:20 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:26.946 21:33:21 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:26.946 21:33:21 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:27.205 nvme0n1 00:32:27.205 21:33:21 -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:27.205 21:33:21 -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:27.464 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:27.464 Zero copy mechanism will not be used. 00:32:27.464 Running I/O for 2 seconds... 00:32:29.370 00:32:29.370 Latency(us) 00:32:29.370 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:29.370 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:32:29.370 nvme0n1 : 2.00 3966.07 495.76 0.00 0.00 4032.14 3138.83 16763.42 00:32:29.370 =================================================================================================================== 00:32:29.370 Total : 3966.07 495.76 0.00 0.00 4032.14 3138.83 16763.42 00:32:29.370 0 00:32:29.370 21:33:23 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:29.370 21:33:23 -- host/digest.sh@93 -- # get_accel_stats 00:32:29.370 21:33:23 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:29.370 21:33:23 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:29.370 | select(.opcode=="crc32c") 00:32:29.370 | "\(.module_name) \(.executed)"' 00:32:29.370 21:33:23 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:29.629 21:33:23 -- host/digest.sh@94 -- # false 00:32:29.629 21:33:23 -- host/digest.sh@94 -- # exp_module=software 00:32:29.629 21:33:23 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:29.629 21:33:23 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:29.629 21:33:23 -- host/digest.sh@98 -- # killprocess 1664481 00:32:29.629 21:33:23 -- common/autotest_common.sh@936 -- # '[' -z 1664481 ']' 00:32:29.629 21:33:23 -- common/autotest_common.sh@940 -- # kill -0 1664481 00:32:29.629 21:33:23 -- common/autotest_common.sh@941 -- # uname 00:32:29.629 21:33:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:32:29.629 21:33:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1664481 00:32:29.629 21:33:23 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:32:29.629 21:33:23 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:32:29.629 21:33:23 -- common/autotest_common.sh@954 -- # echo 'killing process 
with pid 1664481' 00:32:29.629 killing process with pid 1664481 00:32:29.629 21:33:23 -- common/autotest_common.sh@955 -- # kill 1664481 00:32:29.629 Received shutdown signal, test time was about 2.000000 seconds 00:32:29.629 00:32:29.629 Latency(us) 00:32:29.629 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:29.629 =================================================================================================================== 00:32:29.629 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:29.629 21:33:23 -- common/autotest_common.sh@960 -- # wait 1664481 00:32:29.888 21:33:24 -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:32:29.888 21:33:24 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:29.888 21:33:24 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:29.888 21:33:24 -- host/digest.sh@80 -- # rw=randwrite 00:32:29.888 21:33:24 -- host/digest.sh@80 -- # bs=4096 00:32:29.888 21:33:24 -- host/digest.sh@80 -- # qd=128 00:32:29.888 21:33:24 -- host/digest.sh@80 -- # scan_dsa=false 00:32:29.888 21:33:24 -- host/digest.sh@83 -- # bperfpid=1665433 00:32:29.888 21:33:24 -- host/digest.sh@84 -- # waitforlisten 1665433 /var/tmp/bperf.sock 00:32:29.888 21:33:24 -- common/autotest_common.sh@817 -- # '[' -z 1665433 ']' 00:32:29.888 21:33:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:29.888 21:33:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:32:29.888 21:33:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:29.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:29.888 21:33:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:32:29.888 21:33:24 -- common/autotest_common.sh@10 -- # set +x 00:32:29.888 21:33:24 -- host/digest.sh@82 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:32:30.149 [2024-04-23 21:33:24.164057] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 
00:32:30.149 [2024-04-23 21:33:24.164176] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1665433 ] 00:32:30.149 EAL: No free 2048 kB hugepages reported on node 1 00:32:30.149 [2024-04-23 21:33:24.283820] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:30.149 [2024-04-23 21:33:24.379499] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:30.716 21:33:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:32:30.716 21:33:24 -- common/autotest_common.sh@850 -- # return 0 00:32:30.716 21:33:24 -- host/digest.sh@86 -- # false 00:32:30.716 21:33:24 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:30.716 21:33:24 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:30.974 21:33:25 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:30.974 21:33:25 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:31.232 nvme0n1 00:32:31.232 21:33:25 -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:31.232 21:33:25 -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:31.491 Running I/O for 2 seconds... 00:32:33.398 00:32:33.398 Latency(us) 00:32:33.398 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:33.398 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:33.398 nvme0n1 : 2.00 24255.24 94.75 0.00 0.00 5269.16 2362.75 9864.89 00:32:33.398 =================================================================================================================== 00:32:33.398 Total : 24255.24 94.75 0.00 0.00 5269.16 2362.75 9864.89 00:32:33.398 0 00:32:33.399 21:33:27 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:33.399 21:33:27 -- host/digest.sh@93 -- # get_accel_stats 00:32:33.399 21:33:27 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:33.399 21:33:27 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:33.399 | select(.opcode=="crc32c") 00:32:33.399 | "\(.module_name) \(.executed)"' 00:32:33.399 21:33:27 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:33.657 21:33:27 -- host/digest.sh@94 -- # false 00:32:33.657 21:33:27 -- host/digest.sh@94 -- # exp_module=software 00:32:33.657 21:33:27 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:33.657 21:33:27 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:33.657 21:33:27 -- host/digest.sh@98 -- # killprocess 1665433 00:32:33.657 21:33:27 -- common/autotest_common.sh@936 -- # '[' -z 1665433 ']' 00:32:33.657 21:33:27 -- common/autotest_common.sh@940 -- # kill -0 1665433 00:32:33.657 21:33:27 -- common/autotest_common.sh@941 -- # uname 00:32:33.657 21:33:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:32:33.657 21:33:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1665433 00:32:33.657 21:33:27 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:32:33.657 21:33:27 -- 
common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:32:33.657 21:33:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1665433' 00:32:33.657 killing process with pid 1665433 00:32:33.657 21:33:27 -- common/autotest_common.sh@955 -- # kill 1665433 00:32:33.657 Received shutdown signal, test time was about 2.000000 seconds 00:32:33.657 00:32:33.657 Latency(us) 00:32:33.657 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:33.657 =================================================================================================================== 00:32:33.657 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:33.657 21:33:27 -- common/autotest_common.sh@960 -- # wait 1665433 00:32:33.915 21:33:28 -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:32:33.915 21:33:28 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:33.915 21:33:28 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:33.915 21:33:28 -- host/digest.sh@80 -- # rw=randwrite 00:32:33.915 21:33:28 -- host/digest.sh@80 -- # bs=131072 00:32:33.915 21:33:28 -- host/digest.sh@80 -- # qd=16 00:32:33.915 21:33:28 -- host/digest.sh@80 -- # scan_dsa=false 00:32:33.915 21:33:28 -- host/digest.sh@83 -- # bperfpid=1666793 00:32:33.915 21:33:28 -- host/digest.sh@84 -- # waitforlisten 1666793 /var/tmp/bperf.sock 00:32:33.915 21:33:28 -- common/autotest_common.sh@817 -- # '[' -z 1666793 ']' 00:32:33.915 21:33:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:33.915 21:33:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:32:33.915 21:33:28 -- host/digest.sh@82 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:32:33.915 21:33:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:33.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:33.915 21:33:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:32:33.915 21:33:28 -- common/autotest_common.sh@10 -- # set +x 00:32:33.915 [2024-04-23 21:33:28.187145] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 00:32:33.915 [2024-04-23 21:33:28.187261] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1666793 ] 00:32:33.915 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:33.915 Zero copy mechanism will not be used. 
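With this run_bperf call the dsa_target half is cycling through the same four-workload matrix the initiator half used, but with scan_dsa=false, so the initiator keeps the software crc32c engine (hence exp_module=software in the checks above). The four passes, as invoked from host/digest.sh; the line tags match the @128..@131 markers visible at the start of each pass in the trace:

    run_bperf randread  4096   128 false   # digest.sh@128
    run_bperf randread  131072 16  false   # digest.sh@129
    run_bperf randwrite 4096   128 false   # digest.sh@130
    run_bperf randwrite 131072 16  false   # digest.sh@131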
00:32:34.173 EAL: No free 2048 kB hugepages reported on node 1 00:32:34.173 [2024-04-23 21:33:28.298676] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:34.173 [2024-04-23 21:33:28.387370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:34.744 21:33:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:32:34.744 21:33:28 -- common/autotest_common.sh@850 -- # return 0 00:32:34.744 21:33:28 -- host/digest.sh@86 -- # false 00:32:34.744 21:33:28 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:34.744 21:33:28 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:35.005 21:33:29 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:35.005 21:33:29 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:35.265 nvme0n1 00:32:35.266 21:33:29 -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:35.266 21:33:29 -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:35.266 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:35.266 Zero copy mechanism will not be used. 00:32:35.266 Running I/O for 2 seconds... 00:32:37.801 00:32:37.801 Latency(us) 00:32:37.801 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:37.801 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:32:37.801 nvme0n1 : 2.01 2487.34 310.92 0.00 0.00 6420.20 4656.51 12141.41 00:32:37.801 =================================================================================================================== 00:32:37.801 Total : 2487.34 310.92 0.00 0.00 6420.20 4656.51 12141.41 00:32:37.801 0 00:32:37.801 21:33:31 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:37.801 21:33:31 -- host/digest.sh@93 -- # get_accel_stats 00:32:37.801 21:33:31 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:37.801 21:33:31 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:37.801 | select(.opcode=="crc32c") 00:32:37.801 | "\(.module_name) \(.executed)"' 00:32:37.801 21:33:31 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:37.801 21:33:31 -- host/digest.sh@94 -- # false 00:32:37.801 21:33:31 -- host/digest.sh@94 -- # exp_module=software 00:32:37.801 21:33:31 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:37.801 21:33:31 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:37.801 21:33:31 -- host/digest.sh@98 -- # killprocess 1666793 00:32:37.801 21:33:31 -- common/autotest_common.sh@936 -- # '[' -z 1666793 ']' 00:32:37.801 21:33:31 -- common/autotest_common.sh@940 -- # kill -0 1666793 00:32:37.801 21:33:31 -- common/autotest_common.sh@941 -- # uname 00:32:37.801 21:33:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:32:37.801 21:33:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1666793 00:32:37.801 21:33:31 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:32:37.801 21:33:31 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:32:37.801 21:33:31 -- common/autotest_common.sh@954 -- # echo 'killing process 
with pid 1666793' 00:32:37.801 killing process with pid 1666793 00:32:37.801 21:33:31 -- common/autotest_common.sh@955 -- # kill 1666793 00:32:37.801 Received shutdown signal, test time was about 2.000000 seconds 00:32:37.801 00:32:37.801 Latency(us) 00:32:37.801 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:37.801 =================================================================================================================== 00:32:37.801 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:37.801 21:33:31 -- common/autotest_common.sh@960 -- # wait 1666793 00:32:37.801 21:33:32 -- host/digest.sh@132 -- # killprocess 1661800 00:32:37.801 21:33:32 -- common/autotest_common.sh@936 -- # '[' -z 1661800 ']' 00:32:37.801 21:33:32 -- common/autotest_common.sh@940 -- # kill -0 1661800 00:32:37.801 21:33:32 -- common/autotest_common.sh@941 -- # uname 00:32:37.801 21:33:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:32:37.801 21:33:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1661800 00:32:38.059 21:33:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:32:38.059 21:33:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:32:38.059 21:33:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1661800' 00:32:38.059 killing process with pid 1661800 00:32:38.059 21:33:32 -- common/autotest_common.sh@955 -- # kill 1661800 00:32:38.059 21:33:32 -- common/autotest_common.sh@960 -- # wait 1661800 00:32:39.438 00:32:39.438 real 0m23.478s 00:32:39.438 user 0m34.013s 00:32:39.438 sys 0m3.545s 00:32:39.438 21:33:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:39.438 21:33:33 -- common/autotest_common.sh@10 -- # set +x 00:32:39.438 ************************************ 00:32:39.438 END TEST nvmf_digest_dsa_target 00:32:39.438 ************************************ 00:32:39.438 21:33:33 -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:32:39.438 21:33:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:32:39.438 21:33:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:39.438 21:33:33 -- common/autotest_common.sh@10 -- # set +x 00:32:39.697 ************************************ 00:32:39.697 START TEST nvmf_digest_error 00:32:39.697 ************************************ 00:32:39.697 21:33:33 -- common/autotest_common.sh@1111 -- # run_digest_error 00:32:39.697 21:33:33 -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:32:39.697 21:33:33 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:32:39.697 21:33:33 -- common/autotest_common.sh@710 -- # xtrace_disable 00:32:39.697 21:33:33 -- common/autotest_common.sh@10 -- # set +x 00:32:39.697 21:33:33 -- nvmf/common.sh@470 -- # nvmfpid=1668341 00:32:39.697 21:33:33 -- nvmf/common.sh@471 -- # waitforlisten 1668341 00:32:39.697 21:33:33 -- common/autotest_common.sh@817 -- # '[' -z 1668341 ']' 00:32:39.697 21:33:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:39.697 21:33:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:32:39.697 21:33:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:39.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
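The nvmf_digest_error test starts its own target the same way the dsa_target test did: nvmf_tgt is launched inside the cvl_0_0_ns_spdk network namespace with --wait-for-rpc so that, as the following lines show, crc32c can be rerouted to the error-injecting accel module before the framework comes up. A sketch of that bring-up, with commands taken from the trace (the framework_start_init step is implied by --wait-for-rpc rather than shown explicitly in the trace, so treat its placement here as an assumption):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --wait-for-rpc &
    rpc="/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py"   # talks to /var/tmp/spdk.sock
    $rpc accel_assign_opc -o crc32c -m error   # route all crc32c work through the error module
    $rpc framework_start_init                  # assumed: leave the --wait-for-rpc state
    # common_target_config then creates the null0 bdev and the TCP listener on 10.0.0.2:4420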
00:32:39.697 21:33:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:32:39.697 21:33:33 -- common/autotest_common.sh@10 -- # set +x 00:32:39.697 21:33:33 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:32:39.697 [2024-04-23 21:33:33.827331] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 00:32:39.697 [2024-04-23 21:33:33.827434] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:39.697 EAL: No free 2048 kB hugepages reported on node 1 00:32:39.697 [2024-04-23 21:33:33.955051] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:39.956 [2024-04-23 21:33:34.052145] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:39.956 [2024-04-23 21:33:34.052182] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:39.956 [2024-04-23 21:33:34.052191] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:39.956 [2024-04-23 21:33:34.052201] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:39.956 [2024-04-23 21:33:34.052208] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:39.956 [2024-04-23 21:33:34.052234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:40.524 21:33:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:32:40.524 21:33:34 -- common/autotest_common.sh@850 -- # return 0 00:32:40.524 21:33:34 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:32:40.524 21:33:34 -- common/autotest_common.sh@716 -- # xtrace_disable 00:32:40.524 21:33:34 -- common/autotest_common.sh@10 -- # set +x 00:32:40.524 21:33:34 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:40.524 21:33:34 -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:32:40.524 21:33:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:40.524 21:33:34 -- common/autotest_common.sh@10 -- # set +x 00:32:40.524 [2024-04-23 21:33:34.568733] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:32:40.524 21:33:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:40.524 21:33:34 -- host/digest.sh@105 -- # common_target_config 00:32:40.524 21:33:34 -- host/digest.sh@43 -- # rpc_cmd 00:32:40.524 21:33:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:40.524 21:33:34 -- common/autotest_common.sh@10 -- # set +x 00:32:40.524 null0 00:32:40.524 [2024-04-23 21:33:34.725220] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:40.524 [2024-04-23 21:33:34.749373] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:40.524 21:33:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:40.524 21:33:34 -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:32:40.524 21:33:34 -- host/digest.sh@54 -- # local rw bs qd 00:32:40.524 21:33:34 -- host/digest.sh@56 -- # rw=randread 00:32:40.524 21:33:34 -- host/digest.sh@56 -- # bs=4096 00:32:40.524 21:33:34 -- host/digest.sh@56 -- # qd=128 00:32:40.524 21:33:34 -- host/digest.sh@58 -- # bperfpid=1668507 
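run_bperf_err differs from run_bperf in two ways visible in the lines that follow: bdevperf is started without --wait-for-rpc (there is no accel module to scan on the initiator side), and the controller is attached with error statistics and unlimited retries so that injected digest failures are retried rather than aborting the run. A sketch of the initiator-side setup, from the digest.sh@57..@64 steps in the trace:

    /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z &
    rpc="/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
    $rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # count errors, retry forever
    $rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0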
00:32:40.524 21:33:34 -- host/digest.sh@60 -- # waitforlisten 1668507 /var/tmp/bperf.sock 00:32:40.524 21:33:34 -- common/autotest_common.sh@817 -- # '[' -z 1668507 ']' 00:32:40.524 21:33:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:40.524 21:33:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:32:40.524 21:33:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:40.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:40.524 21:33:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:32:40.524 21:33:34 -- common/autotest_common.sh@10 -- # set +x 00:32:40.524 21:33:34 -- host/digest.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:32:40.783 [2024-04-23 21:33:34.823404] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 00:32:40.783 [2024-04-23 21:33:34.823512] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1668507 ] 00:32:40.783 EAL: No free 2048 kB hugepages reported on node 1 00:32:40.783 [2024-04-23 21:33:34.932651] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:40.783 [2024-04-23 21:33:35.021843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:41.351 21:33:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:32:41.351 21:33:35 -- common/autotest_common.sh@850 -- # return 0 00:32:41.351 21:33:35 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:41.351 21:33:35 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:41.613 21:33:35 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:32:41.613 21:33:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:41.613 21:33:35 -- common/autotest_common.sh@10 -- # set +x 00:32:41.613 21:33:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:41.613 21:33:35 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:41.613 21:33:35 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:41.613 nvme0n1 00:32:41.874 21:33:35 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:32:41.874 21:33:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:41.874 21:33:35 -- common/autotest_common.sh@10 -- # set +x 00:32:41.874 21:33:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:41.874 21:33:35 -- host/digest.sh@69 -- # bperf_py perform_tests 00:32:41.874 21:33:35 -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:41.874 Running I/O for 2 seconds... 
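Everything from here to the end of the run is the injection doing its job. The target disables corruption while the controller attaches (accel_error_inject_error -o crc32c -t disable), then re-enables it (accel_error_inject_error -o crc32c -t corrupt -i 256, arguments as in the trace) before perform_tests, so affected READ completions carry a bad TCP data digest. The initiator's nvme_tcp layer detects the mismatch, logs the offending command, and completes the I/O as a transient transport error with dnr:0, i.e. retryable, which --bdev-retry-count -1 then absorbs. The repeating three-entry pattern below decodes as:

    # nvme_tcp.c:1447:  data digest error on tqpair=(...)              -> digest verify failed on a PDU
    # nvme_qpair.c:243: READ sqid:1 cid:NN nsid:1 lba:... len:1 ...    -> the offending command
    # nvme_qpair.c:474: COMMAND TRANSIENT TRANSPORT ERROR (00/22) ... dnr:0   -> retryable status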
00:32:41.874 [2024-04-23 21:33:36.001889] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240)
00:32:41.874 [2024-04-23 21:33:36.001939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:41.874 [2024-04-23 21:33:36.001953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
... (entries from 21:33:36.012682 through 21:33:37.451077 elided: the same nvme_tcp.c:1447 data digest error on tqpair=(0x614000007240) repeats for READ commands on qid:1 with varying cid and lba, each len:1, and each completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22)) ...
00:32:43.436 [2024-04-23 21:33:37.461865] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240)
00:32:43.436 [2024-04-23 21:33:37.461890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:43.436 [2024-04-23 21:33:37.461899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:43.436 [2024-04-23 21:33:37.470466] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:43.436 [2024-04-23 21:33:37.470490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.436 [2024-04-23 21:33:37.470500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.436 [2024-04-23 21:33:37.481455] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:43.436 [2024-04-23 21:33:37.481482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:12378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.436 [2024-04-23 21:33:37.481493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.436 [2024-04-23 21:33:37.489989] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:43.436 [2024-04-23 21:33:37.490015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:6120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.436 [2024-04-23 21:33:37.490025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.436 [2024-04-23 21:33:37.501310] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:43.437 [2024-04-23 21:33:37.501334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:21091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.437 [2024-04-23 21:33:37.501348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.437 [2024-04-23 21:33:37.511838] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:43.437 [2024-04-23 21:33:37.511864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:16900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.437 [2024-04-23 21:33:37.511874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.437 [2024-04-23 21:33:37.521003] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:43.437 [2024-04-23 21:33:37.521029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.437 [2024-04-23 21:33:37.521039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.437 [2024-04-23 21:33:37.529573] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:43.437 [2024-04-23 21:33:37.529599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:5498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.437 [2024-04-23 21:33:37.529609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.437 [2024-04-23 21:33:37.540199] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:43.437 [2024-04-23 21:33:37.540223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:16795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.437 [2024-04-23 21:33:37.540233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.437 [2024-04-23 21:33:37.550158] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:43.437 [2024-04-23 21:33:37.550182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:16154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.437 [2024-04-23 21:33:37.550191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.437 [2024-04-23 21:33:37.559376] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:43.437 [2024-04-23 21:33:37.559405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:18113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.437 [2024-04-23 21:33:37.559416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.437 [2024-04-23 21:33:37.569642] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:43.437 [2024-04-23 21:33:37.569668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:1412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.437 [2024-04-23 21:33:37.569678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.437 [2024-04-23 21:33:37.579820] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:43.437 [2024-04-23 21:33:37.579845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:5350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.437 [2024-04-23 21:33:37.579856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.437 [2024-04-23 21:33:37.589059] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:43.437 [2024-04-23 21:33:37.589088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.437 [2024-04-23 21:33:37.589099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.437 [2024-04-23 21:33:37.599536] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:43.437 [2024-04-23 21:33:37.599561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:2793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.437 [2024-04-23 21:33:37.599571] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.437 [2024-04-23 21:33:37.610198] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:43.437 [2024-04-23 21:33:37.610230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:2041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.437 [2024-04-23 21:33:37.610241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.437 [2024-04-23 21:33:37.618819] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:43.437 [2024-04-23 21:33:37.618846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:13236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.437 [2024-04-23 21:33:37.618856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.437 [2024-04-23 21:33:37.632256] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:43.437 [2024-04-23 21:33:37.632283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:2132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.437 [2024-04-23 21:33:37.632293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.437 [2024-04-23 21:33:37.644513] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:43.437 [2024-04-23 21:33:37.644547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.437 [2024-04-23 21:33:37.644558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.437 [2024-04-23 21:33:37.652921] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:43.437 [2024-04-23 21:33:37.652946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:20476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.437 [2024-04-23 21:33:37.652955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.437 [2024-04-23 21:33:37.664576] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:43.437 [2024-04-23 21:33:37.664601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:3621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.437 [2024-04-23 21:33:37.664611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.437 [2024-04-23 21:33:37.674377] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:43.437 [2024-04-23 21:33:37.674402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:21054 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.437 [2024-04-23 21:33:37.674417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.437 [2024-04-23 21:33:37.683275] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:43.437 [2024-04-23 21:33:37.683304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:18758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.437 [2024-04-23 21:33:37.683314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.437 [2024-04-23 21:33:37.693481] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:43.437 [2024-04-23 21:33:37.693507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:25056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.437 [2024-04-23 21:33:37.693517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.437 [2024-04-23 21:33:37.702463] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:43.437 [2024-04-23 21:33:37.702488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:6919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.437 [2024-04-23 21:33:37.702498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.699 [2024-04-23 21:33:37.713030] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:43.699 [2024-04-23 21:33:37.713055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.699 [2024-04-23 21:33:37.713065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.699 [2024-04-23 21:33:37.723567] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:43.699 [2024-04-23 21:33:37.723591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.699 [2024-04-23 21:33:37.723600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.699 [2024-04-23 21:33:37.733419] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:43.699 [2024-04-23 21:33:37.733444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:24874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.699 [2024-04-23 21:33:37.733454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.699 [2024-04-23 21:33:37.741916] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:43.699 [2024-04-23 21:33:37.741940] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.699 [2024-04-23 21:33:37.741950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.699 [2024-04-23 21:33:37.753077] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:43.699 [2024-04-23 21:33:37.753101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:10119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.699 [2024-04-23 21:33:37.753111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.699 [2024-04-23 21:33:37.762729] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:43.699 [2024-04-23 21:33:37.762758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:1091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.699 [2024-04-23 21:33:37.762767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.699 [2024-04-23 21:33:37.771294] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:43.699 [2024-04-23 21:33:37.771319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.699 [2024-04-23 21:33:37.771328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.699 [2024-04-23 21:33:37.781267] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:43.699 [2024-04-23 21:33:37.781292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:14546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.699 [2024-04-23 21:33:37.781302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.699 [2024-04-23 21:33:37.791976] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:43.699 [2024-04-23 21:33:37.792000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:4043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.699 [2024-04-23 21:33:37.792010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.699 [2024-04-23 21:33:37.801054] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:43.699 [2024-04-23 21:33:37.801078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:15621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.699 [2024-04-23 21:33:37.801089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.699 [2024-04-23 21:33:37.810288] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x614000007240) 00:32:43.699 [2024-04-23 21:33:37.810313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:20971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.699 [2024-04-23 21:33:37.810322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.700 [2024-04-23 21:33:37.819665] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:43.700 [2024-04-23 21:33:37.819690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:13000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.700 [2024-04-23 21:33:37.819700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.700 [2024-04-23 21:33:37.829476] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:43.700 [2024-04-23 21:33:37.829501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:13328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.700 [2024-04-23 21:33:37.829510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.700 [2024-04-23 21:33:37.840608] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:43.700 [2024-04-23 21:33:37.840639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:20315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.700 [2024-04-23 21:33:37.840655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.700 [2024-04-23 21:33:37.849109] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:43.700 [2024-04-23 21:33:37.849137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:15449 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.700 [2024-04-23 21:33:37.849147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.700 [2024-04-23 21:33:37.860452] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:43.700 [2024-04-23 21:33:37.860481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:13646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.700 [2024-04-23 21:33:37.860493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.700 [2024-04-23 21:33:37.870353] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:43.700 [2024-04-23 21:33:37.870380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:8513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.700 [2024-04-23 21:33:37.870392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.700 [2024-04-23 21:33:37.879132] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:43.700 [2024-04-23 21:33:37.879157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:21866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.700 [2024-04-23 21:33:37.879168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.700 [2024-04-23 21:33:37.889044] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:43.700 [2024-04-23 21:33:37.889071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:23066 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.700 [2024-04-23 21:33:37.889081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.700 [2024-04-23 21:33:37.898857] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:43.700 [2024-04-23 21:33:37.898889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:22178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.700 [2024-04-23 21:33:37.898902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.700 [2024-04-23 21:33:37.907820] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:43.700 [2024-04-23 21:33:37.907845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:23161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.700 [2024-04-23 21:33:37.907857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.700 [2024-04-23 21:33:37.918057] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:43.700 [2024-04-23 21:33:37.918086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:3338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.700 [2024-04-23 21:33:37.918098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.700 [2024-04-23 21:33:37.930645] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:43.700 [2024-04-23 21:33:37.930680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:16546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.700 [2024-04-23 21:33:37.930692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:43.700 [2024-04-23 21:33:37.939481] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:43.700 [2024-04-23 21:33:37.939507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:4024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.700 [2024-04-23 21:33:37.939519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:43.700 [2024-04-23 21:33:37.951525] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240)
00:32:43.700 [2024-04-23 21:33:37.951552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:4866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:43.700 [2024-04-23 21:33:37.951563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:43.700 [2024-04-23 21:33:37.963632] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240)
00:32:43.700 [2024-04-23 21:33:37.963658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:43.700 [2024-04-23 21:33:37.963667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:43.700 [2024-04-23 21:33:37.971990] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240)
00:32:43.700 [2024-04-23 21:33:37.972019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:8340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:43.700 [2024-04-23 21:33:37.972031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:43.960 [2024-04-23 21:33:37.981499] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240)
00:32:43.960 [2024-04-23 21:33:37.981526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:5675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:43.960 [2024-04-23 21:33:37.981536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:43.960
00:32:43.960 Latency(us)
00:32:43.960 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:43.960 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:32:43.960 nvme0n1 : 2.05 24325.30 95.02 0.00 0.00 5150.91 2121.30 46358.10
00:32:43.960 ===================================================================================================================
00:32:43.960 Total : 24325.30 95.02 0.00 0.00 5150.91 2121.30 46358.10
00:32:43.960 0
00:32:43.960 21:33:38 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:32:43.960 21:33:38 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:32:43.960 21:33:38 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:32:43.960 21:33:38 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:32:43.960 | .driver_specific
00:32:43.960 | .nvme_error
00:32:43.960 | .status_code
00:32:43.960 | .command_transient_transport_error'
00:32:43.960 21:33:38 -- host/digest.sh@71 -- # (( 195 > 0 ))
00:32:43.960 21:33:38 -- host/digest.sh@73 -- # killprocess 1668507
00:32:43.960 21:33:38 -- common/autotest_common.sh@936 -- # '[' -z 1668507 ']'
00:32:43.960 21:33:38 -- common/autotest_common.sh@940 -- # kill -0 1668507
00:32:43.960 21:33:38 -- common/autotest_common.sh@941 -- # uname
00:32:43.960 21:33:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:32:43.960 21:33:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1668507
00:32:44.222 21:33:38 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:32:44.222 21:33:38 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:32:44.222 21:33:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1668507'
00:32:44.222 killing process with pid 1668507
00:32:44.222 21:33:38 -- common/autotest_common.sh@955 -- # kill 1668507
00:32:44.222 Received shutdown signal, test time was about 2.000000 seconds
00:32:44.222
00:32:44.222 Latency(us)
00:32:44.222 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:44.222 ===================================================================================================================
00:32:44.222 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:32:44.222 21:33:38 -- common/autotest_common.sh@960 -- # wait 1668507
00:32:44.482 21:33:38 -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:32:44.482 21:33:38 -- host/digest.sh@54 -- # local rw bs qd
00:32:44.482 21:33:38 -- host/digest.sh@56 -- # rw=randread
00:32:44.482 21:33:38 -- host/digest.sh@56 -- # bs=131072
00:32:44.482 21:33:38 -- host/digest.sh@56 -- # qd=16
00:32:44.482 21:33:38 -- host/digest.sh@58 -- # bperfpid=1670001
00:32:44.482 21:33:38 -- host/digest.sh@60 -- # waitforlisten 1670001 /var/tmp/bperf.sock
00:32:44.482 21:33:38 -- common/autotest_common.sh@817 -- # '[' -z 1670001 ']'
00:32:44.482 21:33:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock
00:32:44.482 21:33:38 -- common/autotest_common.sh@822 -- # local max_retries=100
00:32:44.482 21:33:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:32:44.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:32:44.482 21:33:38 -- common/autotest_common.sh@826 -- # xtrace_disable
00:32:44.482 21:33:38 -- common/autotest_common.sh@10 -- # set +x
00:32:44.482 21:33:38 -- host/digest.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:32:44.482 [2024-04-23 21:33:38.726288] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization...
00:32:44.482 [2024-04-23 21:33:38.726416] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1670001 ]
00:32:44.482 I/O size of 131072 is greater than zero copy threshold (65536).
00:32:44.482 Zero copy mechanism will not be used.
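The xtrace lines above (host/digest.sh@71 through @73) show how each bperf run is judged: get_transient_errcount queries the bdevperf instance over its RPC socket for per-bdev iostats, extracts the command_transient_transport_error counter with jq, and the run passes when the count is positive (here, 195 > 0). A minimal standalone sketch of the same check, assuming an SPDK checkout at $SPDK_DIR and a bdevperf process started with -r /var/tmp/bperf.sock (both paths taken from this log):

    # Sketch only: mirrors the get_transient_errcount sequence traced above.
    SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/dsa-phy-autotest/spdk}
    errcount=$("$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    # The nvme_error block is only populated because bdev_nvme_set_options
    # was called with --nvme-error-stat before the controller was attached.
    (( errcount > 0 )) && echo "saw $errcount transient transport errors"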
00:32:44.740 EAL: No free 2048 kB hugepages reported on node 1
00:32:44.740 [2024-04-23 21:33:38.844650] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:44.740 [2024-04-23 21:33:38.935019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:32:45.306 21:33:39 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:32:45.306 21:33:39 -- common/autotest_common.sh@850 -- # return 0
00:32:45.306 21:33:39 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:32:45.306 21:33:39 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:32:45.306 21:33:39 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:32:45.306 21:33:39 -- common/autotest_common.sh@549 -- # xtrace_disable
00:32:45.306 21:33:39 -- common/autotest_common.sh@10 -- # set +x
00:32:45.564 21:33:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:32:45.564 21:33:39 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:32:45.564 21:33:39 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:32:45.824 nvme0n1
00:32:45.825 21:33:39 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:32:45.825 21:33:39 -- common/autotest_common.sh@549 -- # xtrace_disable
00:32:45.825 21:33:39 -- common/autotest_common.sh@10 -- # set +x
00:32:45.825 21:33:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:32:45.825 21:33:39 -- host/digest.sh@69 -- # bperf_py perform_tests
00:32:45.825 21:33:39 -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:32:45.825 I/O size of 131072 is greater than zero copy threshold (65536).
00:32:45.825 Zero copy mechanism will not be used.
00:32:45.825 Running I/O for 2 seconds...
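Before the two-second run above begins, the trace shows the error-injection plumbing being set up again for the 131072-byte, qd=16 case: error counting is enabled in the NVMe bdev layer, the TCP controller is attached with data digest turned on (--ddgst), and the accel framework is told to corrupt crc32c results so affected READs fail their digest check. A hedged sketch of that sequence follows; the rpc_cmd socket is assumed to be the target app's default, while everything else is copied from the trace:

    # Sketch of the setup traced above; not the verbatim digest.sh helpers.
    SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/dsa-phy-autotest/spdk}
    bperf_rpc() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }  # bdevperf instance
    target_rpc() { "$SPDK_DIR/scripts/rpc.py" "$@"; }                        # default socket assumed

    bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1  # count errors, retry forever
    target_rpc accel_error_inject_error -o crc32c -t disable                 # clear any stale injection
    bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                       # --ddgst: TCP data digest on
    target_rpc accel_error_inject_error -o crc32c -t corrupt -i 32           # corrupt crc32c (-i 32 as logged)
    "$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests

With the digest corrupted, each affected READ completes as COMMAND TRANSIENT TRANSPORT ERROR (00/22), which is exactly what the records below show and what get_transient_errcount counts afterwards.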
00:32:45.825 [2024-04-23 21:33:39.992941] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:45.825 [2024-04-23 21:33:39.992990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.825 [2024-04-23 21:33:39.993005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:45.825 [2024-04-23 21:33:40.004723] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:45.825 [2024-04-23 21:33:40.004766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.825 [2024-04-23 21:33:40.004781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:45.825 [2024-04-23 21:33:40.015667] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:45.825 [2024-04-23 21:33:40.015699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.825 [2024-04-23 21:33:40.015715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:45.825 [2024-04-23 21:33:40.026477] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:45.825 [2024-04-23 21:33:40.026506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.825 [2024-04-23 21:33:40.026518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.825 [2024-04-23 21:33:40.037610] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:45.825 [2024-04-23 21:33:40.037645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.825 [2024-04-23 21:33:40.037661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:45.825 [2024-04-23 21:33:40.048373] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:45.825 [2024-04-23 21:33:40.048397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.825 [2024-04-23 21:33:40.048407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:45.825 [2024-04-23 21:33:40.058647] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:45.825 [2024-04-23 21:33:40.058670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.825 [2024-04-23 21:33:40.058680] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:45.825 [2024-04-23 21:33:40.069127] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:45.825 [2024-04-23 21:33:40.069156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.825 [2024-04-23 21:33:40.069167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:45.825 [2024-04-23 21:33:40.079947] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:45.825 [2024-04-23 21:33:40.079976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.825 [2024-04-23 21:33:40.079988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:45.825 [2024-04-23 21:33:40.090423] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:45.825 [2024-04-23 21:33:40.090450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.825 [2024-04-23 21:33:40.090461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.087 [2024-04-23 21:33:40.101268] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.087 [2024-04-23 21:33:40.101295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.087 [2024-04-23 21:33:40.101306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.087 [2024-04-23 21:33:40.112147] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.087 [2024-04-23 21:33:40.112172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.087 [2024-04-23 21:33:40.112182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.087 [2024-04-23 21:33:40.123096] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.087 [2024-04-23 21:33:40.123121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.087 [2024-04-23 21:33:40.123131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.087 [2024-04-23 21:33:40.133765] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.087 [2024-04-23 21:33:40.133801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:46.087 [2024-04-23 21:33:40.133811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.087 [2024-04-23 21:33:40.144442] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.087 [2024-04-23 21:33:40.144466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.087 [2024-04-23 21:33:40.144476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.087 [2024-04-23 21:33:40.155199] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.087 [2024-04-23 21:33:40.155230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.087 [2024-04-23 21:33:40.155239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.087 [2024-04-23 21:33:40.165929] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.087 [2024-04-23 21:33:40.165953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.087 [2024-04-23 21:33:40.165963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.087 [2024-04-23 21:33:40.176561] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.087 [2024-04-23 21:33:40.176584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.087 [2024-04-23 21:33:40.176594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.087 [2024-04-23 21:33:40.187321] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.087 [2024-04-23 21:33:40.187344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.087 [2024-04-23 21:33:40.187353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.087 [2024-04-23 21:33:40.197960] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.087 [2024-04-23 21:33:40.197983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.087 [2024-04-23 21:33:40.197992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.087 [2024-04-23 21:33:40.208802] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.087 [2024-04-23 21:33:40.208825] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.087 [2024-04-23 21:33:40.208834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.087 [2024-04-23 21:33:40.219214] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.087 [2024-04-23 21:33:40.219239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.087 [2024-04-23 21:33:40.219248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.087 [2024-04-23 21:33:40.229906] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.087 [2024-04-23 21:33:40.229930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.087 [2024-04-23 21:33:40.229940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.087 [2024-04-23 21:33:40.240598] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.087 [2024-04-23 21:33:40.240621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.087 [2024-04-23 21:33:40.240635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.087 [2024-04-23 21:33:40.251220] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.087 [2024-04-23 21:33:40.251248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.087 [2024-04-23 21:33:40.251258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.087 [2024-04-23 21:33:40.262053] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.087 [2024-04-23 21:33:40.262076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.087 [2024-04-23 21:33:40.262086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.087 [2024-04-23 21:33:40.272679] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.087 [2024-04-23 21:33:40.272702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.087 [2024-04-23 21:33:40.272712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.087 [2024-04-23 21:33:40.283355] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 
00:32:46.087 [2024-04-23 21:33:40.283378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.087 [2024-04-23 21:33:40.283387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.087 [2024-04-23 21:33:40.293906] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.087 [2024-04-23 21:33:40.293929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.087 [2024-04-23 21:33:40.293939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.087 [2024-04-23 21:33:40.304587] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.088 [2024-04-23 21:33:40.304610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.088 [2024-04-23 21:33:40.304620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.088 [2024-04-23 21:33:40.315187] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.088 [2024-04-23 21:33:40.315211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.088 [2024-04-23 21:33:40.315220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.088 [2024-04-23 21:33:40.325886] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.088 [2024-04-23 21:33:40.325909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.088 [2024-04-23 21:33:40.325919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.088 [2024-04-23 21:33:40.336578] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.088 [2024-04-23 21:33:40.336600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.088 [2024-04-23 21:33:40.336610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.088 [2024-04-23 21:33:40.347204] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.088 [2024-04-23 21:33:40.347227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.088 [2024-04-23 21:33:40.347237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.088 [2024-04-23 21:33:40.357909] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.088 [2024-04-23 21:33:40.357933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.088 [2024-04-23 21:33:40.357943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.350 [2024-04-23 21:33:40.368477] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.350 [2024-04-23 21:33:40.368502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.350 [2024-04-23 21:33:40.368512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.350 [2024-04-23 21:33:40.379235] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.350 [2024-04-23 21:33:40.379268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.350 [2024-04-23 21:33:40.379278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.350 [2024-04-23 21:33:40.389867] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.350 [2024-04-23 21:33:40.389893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.350 [2024-04-23 21:33:40.389904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.350 [2024-04-23 21:33:40.400009] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.350 [2024-04-23 21:33:40.400034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.350 [2024-04-23 21:33:40.400044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.350 [2024-04-23 21:33:40.410647] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.350 [2024-04-23 21:33:40.410671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.350 [2024-04-23 21:33:40.410681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.350 [2024-04-23 21:33:40.421326] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.350 [2024-04-23 21:33:40.421350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.350 [2024-04-23 21:33:40.421360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.350 [2024-04-23 21:33:40.431943] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.350 [2024-04-23 21:33:40.431967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.350 [2024-04-23 21:33:40.431983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.350 [2024-04-23 21:33:40.442632] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.350 [2024-04-23 21:33:40.442656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.350 [2024-04-23 21:33:40.442666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.350 [2024-04-23 21:33:40.453322] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.350 [2024-04-23 21:33:40.453345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.350 [2024-04-23 21:33:40.453355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.350 [2024-04-23 21:33:40.463947] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.350 [2024-04-23 21:33:40.463971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.350 [2024-04-23 21:33:40.463981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.350 [2024-04-23 21:33:40.474613] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.350 [2024-04-23 21:33:40.474641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.350 [2024-04-23 21:33:40.474650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.350 [2024-04-23 21:33:40.485301] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.350 [2024-04-23 21:33:40.485324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.350 [2024-04-23 21:33:40.485334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.350 [2024-04-23 21:33:40.495988] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.350 [2024-04-23 21:33:40.496012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.350 [2024-04-23 21:33:40.496022] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.350 [2024-04-23 21:33:40.506610] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.350 [2024-04-23 21:33:40.506651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.350 [2024-04-23 21:33:40.506662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.351 [2024-04-23 21:33:40.517292] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.351 [2024-04-23 21:33:40.517315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.351 [2024-04-23 21:33:40.517325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.351 [2024-04-23 21:33:40.528086] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.351 [2024-04-23 21:33:40.528110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.351 [2024-04-23 21:33:40.528120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.351 [2024-04-23 21:33:40.538739] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.351 [2024-04-23 21:33:40.538763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.351 [2024-04-23 21:33:40.538772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.351 [2024-04-23 21:33:40.549435] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.351 [2024-04-23 21:33:40.549458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.351 [2024-04-23 21:33:40.549467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.351 [2024-04-23 21:33:40.559941] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.351 [2024-04-23 21:33:40.559967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.351 [2024-04-23 21:33:40.559977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.351 [2024-04-23 21:33:40.570634] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.351 [2024-04-23 21:33:40.570658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.351 [2024-04-23 21:33:40.570668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.351 [2024-04-23 21:33:40.581314] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.351 [2024-04-23 21:33:40.581340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.351 [2024-04-23 21:33:40.581350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.351 [2024-04-23 21:33:40.591942] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.351 [2024-04-23 21:33:40.591966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.351 [2024-04-23 21:33:40.591976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.351 [2024-04-23 21:33:40.602639] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.351 [2024-04-23 21:33:40.602663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.351 [2024-04-23 21:33:40.602673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.351 [2024-04-23 21:33:40.613330] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.351 [2024-04-23 21:33:40.613353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.351 [2024-04-23 21:33:40.613367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.613 [2024-04-23 21:33:40.624408] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.613 [2024-04-23 21:33:40.624435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.613 [2024-04-23 21:33:40.624444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.613 [2024-04-23 21:33:40.635240] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.613 [2024-04-23 21:33:40.635264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.613 [2024-04-23 21:33:40.635274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.613 [2024-04-23 21:33:40.645847] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.613 [2024-04-23 21:33:40.645870] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.613 [2024-04-23 21:33:40.645880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.613 [2024-04-23 21:33:40.656516] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.613 [2024-04-23 21:33:40.656539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.613 [2024-04-23 21:33:40.656549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.613 [2024-04-23 21:33:40.667203] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.613 [2024-04-23 21:33:40.667226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.613 [2024-04-23 21:33:40.667237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.613 [2024-04-23 21:33:40.677827] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.613 [2024-04-23 21:33:40.677854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.613 [2024-04-23 21:33:40.677865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.613 [2024-04-23 21:33:40.688492] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.613 [2024-04-23 21:33:40.688516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.613 [2024-04-23 21:33:40.688526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.613 [2024-04-23 21:33:40.699204] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.613 [2024-04-23 21:33:40.699228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.613 [2024-04-23 21:33:40.699238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.613 [2024-04-23 21:33:40.709838] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.613 [2024-04-23 21:33:40.709861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.614 [2024-04-23 21:33:40.709871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.614 [2024-04-23 21:33:40.720517] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.614 [2024-04-23 21:33:40.720542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.614 [2024-04-23 21:33:40.720552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.614 [2024-04-23 21:33:40.731215] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.614 [2024-04-23 21:33:40.731239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.614 [2024-04-23 21:33:40.731249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.614 [2024-04-23 21:33:40.741835] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.614 [2024-04-23 21:33:40.741857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.614 [2024-04-23 21:33:40.741867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.614 [2024-04-23 21:33:40.752544] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.614 [2024-04-23 21:33:40.752570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.614 [2024-04-23 21:33:40.752580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.614 [2024-04-23 21:33:40.763220] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.614 [2024-04-23 21:33:40.763243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.614 [2024-04-23 21:33:40.763253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.614 [2024-04-23 21:33:40.773909] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.614 [2024-04-23 21:33:40.773935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.614 [2024-04-23 21:33:40.773945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.614 [2024-04-23 21:33:40.784606] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.614 [2024-04-23 21:33:40.784636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.614 [2024-04-23 21:33:40.784649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.614 
[2024-04-23 21:33:40.795286] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.614 [2024-04-23 21:33:40.795310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.614 [2024-04-23 21:33:40.795325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.614 [2024-04-23 21:33:40.805906] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.614 [2024-04-23 21:33:40.805930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.614 [2024-04-23 21:33:40.805940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.614 [2024-04-23 21:33:40.816588] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.614 [2024-04-23 21:33:40.816612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.614 [2024-04-23 21:33:40.816622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.614 [2024-04-23 21:33:40.827271] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.614 [2024-04-23 21:33:40.827294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.614 [2024-04-23 21:33:40.827304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.614 [2024-04-23 21:33:40.837834] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.614 [2024-04-23 21:33:40.837857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.614 [2024-04-23 21:33:40.837867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.614 [2024-04-23 21:33:40.848660] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.614 [2024-04-23 21:33:40.848683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.614 [2024-04-23 21:33:40.848693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.614 [2024-04-23 21:33:40.859300] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.614 [2024-04-23 21:33:40.859323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.614 [2024-04-23 21:33:40.859333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.614 [2024-04-23 21:33:40.869992] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.614 [2024-04-23 21:33:40.870015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.614 [2024-04-23 21:33:40.870025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.614 [2024-04-23 21:33:40.880773] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.614 [2024-04-23 21:33:40.880800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.614 [2024-04-23 21:33:40.880810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.873 [2024-04-23 21:33:40.891371] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.873 [2024-04-23 21:33:40.891396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.873 [2024-04-23 21:33:40.891407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.873 [2024-04-23 21:33:40.902056] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.873 [2024-04-23 21:33:40.902079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.873 [2024-04-23 21:33:40.902089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.873 [2024-04-23 21:33:40.912808] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.873 [2024-04-23 21:33:40.912831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.874 [2024-04-23 21:33:40.912842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.874 [2024-04-23 21:33:40.923414] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.874 [2024-04-23 21:33:40.923437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.874 [2024-04-23 21:33:40.923447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.874 [2024-04-23 21:33:40.934093] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.874 [2024-04-23 21:33:40.934117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.874 
[2024-04-23 21:33:40.934127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.874 [2024-04-23 21:33:40.944596] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.874 [2024-04-23 21:33:40.944620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.874 [2024-04-23 21:33:40.944635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.874 [2024-04-23 21:33:40.955274] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.874 [2024-04-23 21:33:40.955297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.874 [2024-04-23 21:33:40.955307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.874 [2024-04-23 21:33:40.965881] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.874 [2024-04-23 21:33:40.965905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.874 [2024-04-23 21:33:40.965914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.874 [2024-04-23 21:33:40.976583] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.874 [2024-04-23 21:33:40.976606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.874 [2024-04-23 21:33:40.976620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.874 [2024-04-23 21:33:40.987269] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.874 [2024-04-23 21:33:40.987293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.874 [2024-04-23 21:33:40.987303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.874 [2024-04-23 21:33:40.997877] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.874 [2024-04-23 21:33:40.997902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.874 [2024-04-23 21:33:40.997911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.874 [2024-04-23 21:33:41.008301] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.874 [2024-04-23 21:33:41.008332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.874 [2024-04-23 21:33:41.008343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.874 [2024-04-23 21:33:41.018895] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.874 [2024-04-23 21:33:41.018921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.874 [2024-04-23 21:33:41.018931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.874 [2024-04-23 21:33:41.029569] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.874 [2024-04-23 21:33:41.029598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.874 [2024-04-23 21:33:41.029609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.874 [2024-04-23 21:33:41.040021] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.874 [2024-04-23 21:33:41.040046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.874 [2024-04-23 21:33:41.040064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.874 [2024-04-23 21:33:41.050633] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.874 [2024-04-23 21:33:41.050658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.874 [2024-04-23 21:33:41.050670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.874 [2024-04-23 21:33:41.061329] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.874 [2024-04-23 21:33:41.061353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.874 [2024-04-23 21:33:41.061364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.874 [2024-04-23 21:33:41.071956] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.874 [2024-04-23 21:33:41.071982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.874 [2024-04-23 21:33:41.071992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.874 [2024-04-23 21:33:41.082557] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.874 
[2024-04-23 21:33:41.082583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.874 [2024-04-23 21:33:41.082593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.874 [2024-04-23 21:33:41.093257] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.874 [2024-04-23 21:33:41.093283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.874 [2024-04-23 21:33:41.093293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.874 [2024-04-23 21:33:41.103883] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.874 [2024-04-23 21:33:41.103907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.874 [2024-04-23 21:33:41.103917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:46.874 [2024-04-23 21:33:41.114569] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.874 [2024-04-23 21:33:41.114594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.874 [2024-04-23 21:33:41.114604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:46.874 [2024-04-23 21:33:41.125268] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.874 [2024-04-23 21:33:41.125291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.874 [2024-04-23 21:33:41.125301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:46.874 [2024-04-23 21:33:41.135292] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.874 [2024-04-23 21:33:41.135316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.874 [2024-04-23 21:33:41.135326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:46.874 [2024-04-23 21:33:41.145341] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:46.874 [2024-04-23 21:33:41.145364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.874 [2024-04-23 21:33:41.145374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:47.133 [2024-04-23 21:33:41.156231] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:47.133 [2024-04-23 21:33:41.156256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.133 [2024-04-23 21:33:41.156271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:47.133 [2024-04-23 21:33:41.166898] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:47.133 [2024-04-23 21:33:41.166924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.133 [2024-04-23 21:33:41.166935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:47.133 [2024-04-23 21:33:41.177790] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:47.133 [2024-04-23 21:33:41.177818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.133 [2024-04-23 21:33:41.177831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:47.133 [2024-04-23 21:33:41.188531] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:47.133 [2024-04-23 21:33:41.188555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.133 [2024-04-23 21:33:41.188564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:47.133 [2024-04-23 21:33:41.199148] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:47.133 [2024-04-23 21:33:41.199171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.133 [2024-04-23 21:33:41.199180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:47.133 [2024-04-23 21:33:41.209830] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:47.133 [2024-04-23 21:33:41.209851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.133 [2024-04-23 21:33:41.209861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:47.133 [2024-04-23 21:33:41.220502] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:47.133 [2024-04-23 21:33:41.220525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.133 [2024-04-23 21:33:41.220535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:47.133 [2024-04-23 21:33:41.231126] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:47.133 [2024-04-23 21:33:41.231149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.133 [2024-04-23 21:33:41.231158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:47.133 [2024-04-23 21:33:41.241809] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:47.133 [2024-04-23 21:33:41.241832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.133 [2024-04-23 21:33:41.241841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:47.133 [2024-04-23 21:33:41.252387] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:47.133 [2024-04-23 21:33:41.252410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.133 [2024-04-23 21:33:41.252419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:47.133 [2024-04-23 21:33:41.263090] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:47.133 [2024-04-23 21:33:41.263112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.133 [2024-04-23 21:33:41.263122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:47.133 [2024-04-23 21:33:41.273775] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:47.133 [2024-04-23 21:33:41.273798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.133 [2024-04-23 21:33:41.273808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:47.133 [2024-04-23 21:33:41.284442] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:47.133 [2024-04-23 21:33:41.284466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.133 [2024-04-23 21:33:41.284477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:47.133 [2024-04-23 21:33:41.294895] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:47.133 [2024-04-23 21:33:41.294918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.133 [2024-04-23 21:33:41.294928] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:47.133 [2024-04-23 21:33:41.305583] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:47.133 [2024-04-23 21:33:41.305606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.133 [2024-04-23 21:33:41.305616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:47.133 [2024-04-23 21:33:41.316273] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:47.133 [2024-04-23 21:33:41.316297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.133 [2024-04-23 21:33:41.316307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:47.133 [2024-04-23 21:33:41.328153] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:47.133 [2024-04-23 21:33:41.328176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.134 [2024-04-23 21:33:41.328186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:47.134 [2024-04-23 21:33:41.336004] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:47.134 [2024-04-23 21:33:41.336029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.134 [2024-04-23 21:33:41.336043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:47.134 [2024-04-23 21:33:41.344214] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:47.134 [2024-04-23 21:33:41.344240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.134 [2024-04-23 21:33:41.344249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:47.134 [2024-04-23 21:33:41.352343] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:47.134 [2024-04-23 21:33:41.352373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.134 [2024-04-23 21:33:41.352385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:47.134 [2024-04-23 21:33:41.360478] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:47.134 [2024-04-23 21:33:41.360507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.134 [2024-04-23 21:33:41.360518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:47.134 [2024-04-23 21:33:41.368686] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:47.134 [2024-04-23 21:33:41.368715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.134 [2024-04-23 21:33:41.368728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:47.134 [2024-04-23 21:33:41.377418] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:47.134 [2024-04-23 21:33:41.377446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.134 [2024-04-23 21:33:41.377458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:47.134 [2024-04-23 21:33:41.385686] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:47.134 [2024-04-23 21:33:41.385715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.134 [2024-04-23 21:33:41.385726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:47.134 [2024-04-23 21:33:41.393530] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:47.134 [2024-04-23 21:33:41.393556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.134 [2024-04-23 21:33:41.393567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:47.134 [2024-04-23 21:33:41.400423] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:47.134 [2024-04-23 21:33:41.400448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.134 [2024-04-23 21:33:41.400458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:47.393 [2024-04-23 21:33:41.407039] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:47.393 [2024-04-23 21:33:41.407065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:47.393 [2024-04-23 21:33:41.407075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:47.393 [2024-04-23 21:33:41.413807] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:32:47.393 [2024-04-23 
21:33:41.413832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.393 [2024-04-23 21:33:41.413842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:47.393 [2024-04-23 21:33:41.420499] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240)
00:32:47.393 [2024-04-23 21:33:41.420522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:47.393 [2024-04-23 21:33:41.420532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... the same three-line pattern (a data digest error on tqpair=(0x614000007240), the failed READ on sqid:1 cid:15 with a varying lba, and its TRANSIENT TRANSPORT ERROR (00/22) completion) repeats every 7 to 9 ms through 21:33:41.978; the intervening entries are omitted ...]
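Each failed read above leaves the same three-line signature: the receive-path CRC32C check in nvme_tcp.c flags a data digest error, the offending READ command is echoed, and the command completes with TRANSIENT TRANSPORT ERROR (00/22). Purely as an illustration (digest.sh itself reads a counter over RPC, as the trace below shows), completions like these could be tallied from a saved console log:

```bash
# Illustrative only, not part of digest.sh: count digest-induced transient
# transport error completions, assuming the output above was captured to a
# hypothetical bperf.log.
grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' bperf.log
```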
00:32:47.913
00:32:47.913 Latency(us)
00:32:47.913 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:47.913 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:32:47.913 nvme0n1 : 2.00 3413.87 426.73 0.00 0.00 4682.85 3225.06 11934.45
00:32:47.913 ===================================================================================================================
00:32:47.913 Total : 3413.87 426.73 0.00 0.00 4682.85 3225.06 11934.45
00:32:47.913 0
00:32:47.913 21:33:41 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:32:47.913 21:33:41 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:32:47.913 | .driver_specific
00:32:47.913 | .nvme_error
00:32:47.913 | .status_code
00:32:47.913 | .command_transient_transport_error'
00:32:47.913 21:33:41 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:32:47.913 21:33:41 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:32:47.913 21:33:42 -- host/digest.sh@71 -- # (( 220 > 0 ))
00:32:47.913 21:33:42 -- host/digest.sh@73 -- # killprocess 1670001
00:32:47.913 21:33:42 -- common/autotest_common.sh@936 -- # '[' -z 1670001 ']'
00:32:47.913 21:33:42 -- common/autotest_common.sh@940 -- # kill -0 1670001
00:32:47.913 21:33:42 -- common/autotest_common.sh@941 -- # uname
00:32:47.913 21:33:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:32:47.913 21:33:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1670001
00:32:48.172 21:33:42 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:32:48.172 21:33:42 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:32:48.172 21:33:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1670001'
00:32:48.172 killing process with pid 1670001
00:32:48.172 21:33:42 -- common/autotest_common.sh@955 -- # kill 1670001
00:32:48.172 Received shutdown signal, test time was about 2.000000 seconds
00:32:48.172
00:32:48.172 Latency(us)
00:32:48.172 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:48.172 ===================================================================================================================
00:32:48.172 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:32:48.172 21:33:42 -- common/autotest_common.sh@960 -- # wait 1670001
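The 220 asserted above is the controller's accumulated transient transport error count, which bdev_get_iostat exposes because the bdev_nvme layer is configured with --nvme-error-stat (the same option is visible in the next test's setup). As a sanity check on the table, 3413.87 IOPS at an IO size of 131072 bytes works out to 3413.87 * 131072 / 2^20 ≈ 426.73 MiB/s, matching the MiB/s column. Collapsed into one runnable command, the counter query traced above is roughly:

```bash
# Minimal consolidation of the get_transient_errcount trace above: ask
# bdevperf's RPC socket for nvme0n1 iostat and extract the transient
# transport error counter that the test asserts is non-zero.
SPDK=/var/jenkins/workspace/dsa-phy-autotest/spdk
errcount=$("$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
(( errcount > 0 )) && echo "transient transport errors: $errcount"   # 220 in this run
```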
00:32:48.430 21:33:42 -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:32:48.430 21:33:42 -- host/digest.sh@54 -- # local rw bs qd
00:32:48.430 21:33:42 -- host/digest.sh@56 -- # rw=randwrite
00:32:48.430 21:33:42 -- host/digest.sh@56 -- # bs=4096
00:32:48.430 21:33:42 -- host/digest.sh@56 -- # qd=128
00:32:48.430 21:33:42 -- host/digest.sh@58 -- # bperfpid=1671157
00:32:48.430 21:33:42 -- host/digest.sh@60 -- # waitforlisten 1671157 /var/tmp/bperf.sock
00:32:48.430 21:33:42 -- common/autotest_common.sh@817 -- # '[' -z 1671157 ']'
00:32:48.430 21:33:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock
00:32:48.430 21:33:42 -- common/autotest_common.sh@822 -- # local max_retries=100
00:32:48.430 21:33:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:32:48.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:32:48.430 21:33:42 -- common/autotest_common.sh@826 -- # xtrace_disable
00:32:48.430 21:33:42 -- common/autotest_common.sh@10 -- # set +x
00:32:48.430 21:33:42 -- host/digest.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:32:48.430 [2024-04-23 21:33:42.625654] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization...
00:32:48.430 [2024-04-23 21:33:42.625783] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1671157 ]
00:32:48.430 EAL: No free 2048 kB hugepages reported on node 1
00:32:48.688 [2024-04-23 21:33:42.748428] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:48.688 [2024-04-23 21:33:42.856859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:32:49.255 21:33:43 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:32:49.255 21:33:43 -- common/autotest_common.sh@850 -- # return 0
00:32:49.255 21:33:43 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:32:49.255 21:33:43 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:32:49.255 21:33:43 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:32:49.255 21:33:43 -- common/autotest_common.sh@549 -- # xtrace_disable
00:32:49.255 21:33:43 -- common/autotest_common.sh@10 -- # set +x
00:32:49.255 21:33:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:32:49.255 21:33:43 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:32:49.255 21:33:43 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:32:49.513 nvme0n1
00:32:49.513 21:33:43 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:32:49.513 21:33:43 -- common/autotest_common.sh@549 -- # xtrace_disable
00:32:49.513 21:33:43 -- common/autotest_common.sh@10 -- # set +x
00:32:49.513 21:33:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:32:49.513 21:33:43 -- host/digest.sh@69 -- # bperf_py perform_tests
00:32:49.513 21:33:43 -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
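Stripped of the harness wrappers, the setup just traced reduces to the sequence below. This is a sketch, not the digest.sh source: it assumes rpc_cmd addresses the NVMe-oF target app on the default RPC socket (the trace does not show which socket rpc_cmd uses), while bperf_rpc and bperf_py clearly address bdevperf's /var/tmp/bperf.sock.

```bash
SPDK=/var/jenkins/workspace/dsa-phy-autotest/spdk
BPERF_RPC="$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock"
TGT_RPC="$SPDK/scripts/rpc.py"   # assumption: target app on the default socket

# bdevperf: core mask 0x2, randwrite, 4096-byte IOs, queue depth 128,
# 2-second runs; -z waits for RPC-driven configuration before starting.
"$SPDK/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
    -w randwrite -o 4096 -t 2 -q 128 -z &

# stand-in for waitforlisten: block until the RPC socket exists
while [ ! -S /var/tmp/bperf.sock ]; do sleep 0.1; done

$BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
$TGT_RPC accel_error_inject_error -o crc32c -t disable        # start clean
$BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0            # data digest on
$TGT_RPC accel_error_inject_error -o crc32c -t corrupt -i 256 # periodic corruption
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests
```

With the data digest negotiated via --ddgst and crc32c results being corrupted at interval 256, every affected write completes with the TRANSIENT TRANSPORT ERROR seen in the run that follows.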
00:32:49.513 Running I/O for 2 seconds...
00:32:49.513 [2024-04-23 21:33:43.775598] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0
00:32:49.513 [2024-04-23 21:33:43.775812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:22434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:49.513 [2024-04-23 21:33:43.775851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:007a p:0 m:0 dnr:0
[... the same three-line pattern (a Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0, the failed WRITE on sqid:1 with varying cid and lba, and its TRANSIENT TRANSPORT ERROR (00/22) completion) repeats roughly every 11 ms from 21:33:43.786 through 21:33:44.279; the intervening entries are omitted ...]
00:32:50.031 [2024-04-23 21:33:44.290026] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0
00:32:50.031 [2024-04-23 21:33:44.290203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:13780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:50.031 [2024-04-23 21:33:44.290223] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:50.032 [2024-04-23 21:33:44.300657] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:32:50.032 [2024-04-23 21:33:44.300832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:23584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:50.032 [2024-04-23 21:33:44.300853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:50.290 [2024-04-23 21:33:44.311293] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:32:50.290 [2024-04-23 21:33:44.311470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:21109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:50.290 [2024-04-23 21:33:44.311491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:50.290 [2024-04-23 21:33:44.321936] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:32:50.290 [2024-04-23 21:33:44.322109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:20843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:50.290 [2024-04-23 21:33:44.322130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:50.290 [2024-04-23 21:33:44.332539] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:32:50.290 [2024-04-23 21:33:44.332726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:50.290 [2024-04-23 21:33:44.332746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:50.290 [2024-04-23 21:33:44.343163] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:32:50.290 [2024-04-23 21:33:44.343339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:50.290 [2024-04-23 21:33:44.343359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:50.290 [2024-04-23 21:33:44.353760] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:32:50.290 [2024-04-23 21:33:44.353935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:6064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:50.290 [2024-04-23 21:33:44.353956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:50.290 [2024-04-23 21:33:44.364336] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:32:50.290 [2024-04-23 21:33:44.364511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:4948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:50.290 [2024-04-23 
21:33:44.364532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:50.290 [2024-04-23 21:33:44.375006] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:32:50.291 [2024-04-23 21:33:44.375179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:17605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:50.291 [2024-04-23 21:33:44.375200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:50.291 [2024-04-23 21:33:44.385598] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:32:50.291 [2024-04-23 21:33:44.385777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:7867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:50.291 [2024-04-23 21:33:44.385800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:50.291 [2024-04-23 21:33:44.396217] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:32:50.291 [2024-04-23 21:33:44.396393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:9409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:50.291 [2024-04-23 21:33:44.396413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:50.291 [2024-04-23 21:33:44.406828] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:32:50.291 [2024-04-23 21:33:44.407001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:18425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:50.291 [2024-04-23 21:33:44.407023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:50.291 [2024-04-23 21:33:44.417524] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:32:50.291 [2024-04-23 21:33:44.417697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:23153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:50.291 [2024-04-23 21:33:44.417719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:50.291 [2024-04-23 21:33:44.428128] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:32:50.291 [2024-04-23 21:33:44.428302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:8285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:50.291 [2024-04-23 21:33:44.428324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:50.291 [2024-04-23 21:33:44.438783] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:32:50.291 [2024-04-23 21:33:44.438959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:22948 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:50.291 [2024-04-23 21:33:44.438980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:50.291 [2024-04-23 21:33:44.449356] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:32:50.291 [2024-04-23 21:33:44.449527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:17683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:50.291 [2024-04-23 21:33:44.449547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:50.291 [2024-04-23 21:33:44.459993] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:32:50.291 [2024-04-23 21:33:44.460168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:7582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:50.291 [2024-04-23 21:33:44.460188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:50.291 [2024-04-23 21:33:44.470576] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:32:50.291 [2024-04-23 21:33:44.470758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:7441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:50.291 [2024-04-23 21:33:44.470778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:50.291 [2024-04-23 21:33:44.481273] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:32:50.291 [2024-04-23 21:33:44.481453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:18053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:50.291 [2024-04-23 21:33:44.481473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:50.291 [2024-04-23 21:33:44.491923] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:32:50.291 [2024-04-23 21:33:44.492100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:23048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:50.291 [2024-04-23 21:33:44.492120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:50.291 [2024-04-23 21:33:44.502581] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:32:50.291 [2024-04-23 21:33:44.502764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:3537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:50.291 [2024-04-23 21:33:44.502784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:50.291 [2024-04-23 21:33:44.513231] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:32:50.291 [2024-04-23 21:33:44.513408] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:50.291 [2024-04-23 21:33:44.513430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:50.291 [2024-04-23 21:33:44.523872] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:32:50.291 [2024-04-23 21:33:44.524047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:4543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:50.291 [2024-04-23 21:33:44.524067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:50.291 [2024-04-23 21:33:44.534508] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:32:50.291 [2024-04-23 21:33:44.534680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:50.291 [2024-04-23 21:33:44.534701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:50.291 [2024-04-23 21:33:44.545105] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:32:50.291 [2024-04-23 21:33:44.545283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:5052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:50.291 [2024-04-23 21:33:44.545303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:50.291 [2024-04-23 21:33:44.555756] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:32:50.291 [2024-04-23 21:33:44.555933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:22621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:50.291 [2024-04-23 21:33:44.555954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:50.551 [2024-04-23 21:33:44.567394] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:32:50.551 [2024-04-23 21:33:44.567592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:50.551 [2024-04-23 21:33:44.567618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:50.551 [2024-04-23 21:33:44.580231] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:32:50.551 [2024-04-23 21:33:44.580449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:50.551 [2024-04-23 21:33:44.580476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:50.551 [2024-04-23 21:33:44.593817] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 
00:32:50.551 [2024-04-23 21:33:44.594036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:22340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:50.551 [2024-04-23 21:33:44.594060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:50.551 [2024-04-23 21:33:44.607634] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:32:50.551 [2024-04-23 21:33:44.607852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:4100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:50.551 [2024-04-23 21:33:44.607878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:50.551 [2024-04-23 21:33:44.620919] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:32:50.551 [2024-04-23 21:33:44.621124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:1946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:50.551 [2024-04-23 21:33:44.621147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:50.551 [2024-04-23 21:33:44.633316] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:32:50.551 [2024-04-23 21:33:44.633510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:11628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:50.551 [2024-04-23 21:33:44.633532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:50.551 [2024-04-23 21:33:44.644898] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:32:50.551 [2024-04-23 21:33:44.645074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:9595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:50.551 [2024-04-23 21:33:44.645095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:50.551 [2024-04-23 21:33:44.655510] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:32:50.551 [2024-04-23 21:33:44.655684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:18651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:50.551 [2024-04-23 21:33:44.655705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:50.551 [2024-04-23 21:33:44.666110] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:32:50.551 [2024-04-23 21:33:44.666282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:21476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:50.551 [2024-04-23 21:33:44.666302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:50.551 [2024-04-23 21:33:44.676695] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:32:50.551 [2024-04-23 21:33:44.676875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:10139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:50.551 [2024-04-23 21:33:44.676896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:50.551 [2024-04-23 21:33:44.687346] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:32:50.551 [2024-04-23 21:33:44.687525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:18086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:50.551 [2024-04-23 21:33:44.687545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:50.551 [2024-04-23 21:33:44.697981] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:32:50.551 [2024-04-23 21:33:44.698156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:4385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:50.551 [2024-04-23 21:33:44.698177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:50.551 [2024-04-23 21:33:44.708684] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:32:50.551 [2024-04-23 21:33:44.708860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:10708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:50.551 [2024-04-23 21:33:44.708879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:50.551 [2024-04-23 21:33:44.719358] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:32:50.551 [2024-04-23 21:33:44.719534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:7677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:50.551 [2024-04-23 21:33:44.719554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:50.551 [2024-04-23 21:33:44.729960] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:32:50.551 [2024-04-23 21:33:44.730133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:8246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:50.551 [2024-04-23 21:33:44.730153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:50.551 [2024-04-23 21:33:44.740573] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:32:50.551 [2024-04-23 21:33:44.740752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:24007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:50.551 [2024-04-23 21:33:44.740772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:50.551 
[2024-04-23 21:33:44.751220] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:32:50.551 [2024-04-23 21:33:44.751393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:24759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:50.551 [2024-04-23 21:33:44.751413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:50.551 [2024-04-23 21:33:44.761817] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:32:50.552 [2024-04-23 21:33:44.761994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:6166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:50.552 [2024-04-23 21:33:44.762016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:50.552 [2024-04-23 21:33:44.772401] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:32:50.552 [2024-04-23 21:33:44.772575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:17105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:50.552 [2024-04-23 21:33:44.772597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:50.552 [2024-04-23 21:33:44.783045] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:32:50.552 [2024-04-23 21:33:44.783221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:50.552 [2024-04-23 21:33:44.783242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:50.552 [2024-04-23 21:33:44.793692] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:32:50.552 [2024-04-23 21:33:44.793867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:13831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:50.552 [2024-04-23 21:33:44.793888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:50.552 [2024-04-23 21:33:44.804325] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:32:50.552 [2024-04-23 21:33:44.804500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:15648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:50.552 [2024-04-23 21:33:44.804523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:50.552 [2024-04-23 21:33:44.814950] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:32:50.552 [2024-04-23 21:33:44.815125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:50.552 [2024-04-23 21:33:44.815147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:50.814 [2024-04-23 21:33:44.826158] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:32:50.814 [2024-04-23 21:33:44.826349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:50.814 [2024-04-23 21:33:44.826372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:50.814 [2024-04-23 21:33:44.837933] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:32:50.814 [2024-04-23 21:33:44.838126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:9073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:50.814 [2024-04-23 21:33:44.838148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:50.814 [2024-04-23 21:33:44.849895] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:32:50.814 [2024-04-23 21:33:44.850096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:10755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:50.814 [2024-04-23 21:33:44.850117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:50.814 [2024-04-23 21:33:44.861593] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:32:50.814 [2024-04-23 21:33:44.861788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:17828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:50.814 [2024-04-23 21:33:44.861814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:50.814 [2024-04-23 21:33:44.873319] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:32:50.814 [2024-04-23 21:33:44.873509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:4509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:50.814 [2024-04-23 21:33:44.873531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:50.814 [2024-04-23 21:33:44.884873] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:32:50.814 [2024-04-23 21:33:44.885048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:7376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:50.814 [2024-04-23 21:33:44.885067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:50.814 [2024-04-23 21:33:44.895525] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:32:50.814 [2024-04-23 21:33:44.895698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:9044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:50.814 [2024-04-23 21:33:44.895719] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:50.814 [2024-04-23 21:33:44.906124] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:32:50.814 [2024-04-23 21:33:44.906295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:9491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:50.814 [2024-04-23 21:33:44.906315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:50.814 [2024-04-23 21:33:44.916792] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:32:50.814 [2024-04-23 21:33:44.916970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:6362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:50.814 [2024-04-23 21:33:44.916992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:50.814 [2024-04-23 21:33:44.927348] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:32:50.814 [2024-04-23 21:33:44.927521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:16943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:50.814 [2024-04-23 21:33:44.927541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:50.814 [2024-04-23 21:33:44.938002] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:32:50.814 [2024-04-23 21:33:44.938178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:20315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:50.814 [2024-04-23 21:33:44.938199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:50.814 [2024-04-23 21:33:44.948580] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:32:50.814 [2024-04-23 21:33:44.948763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:15353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:50.814 [2024-04-23 21:33:44.948784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:50.814 [2024-04-23 21:33:44.959222] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:32:50.814 [2024-04-23 21:33:44.959405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:10099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:50.814 [2024-04-23 21:33:44.959426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:50.814 [2024-04-23 21:33:44.969856] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:32:50.814 [2024-04-23 21:33:44.970031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:10698 len:1 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:32:50.814 [2024-04-23 21:33:44.970051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:50.814 [2024-04-23 21:33:44.980455] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:32:50.814 [2024-04-23 21:33:44.980635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:12306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:50.814 [2024-04-23 21:33:44.980657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:50.814 [2024-04-23 21:33:44.991069] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:32:50.814 [2024-04-23 21:33:44.991244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:9452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:50.814 [2024-04-23 21:33:44.991266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:50.814 [2024-04-23 21:33:45.001669] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:32:50.814 [2024-04-23 21:33:45.001861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:24697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:50.814 [2024-04-23 21:33:45.001884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:50.814 [2024-04-23 21:33:45.012333] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:32:50.814 [2024-04-23 21:33:45.012509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:14861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:50.814 [2024-04-23 21:33:45.012530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:50.814 [2024-04-23 21:33:45.022924] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:32:50.814 [2024-04-23 21:33:45.023100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:11550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:50.814 [2024-04-23 21:33:45.023120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:50.814 [2024-04-23 21:33:45.033716] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:32:50.814 [2024-04-23 21:33:45.033898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:11693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:50.814 [2024-04-23 21:33:45.033926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:50.814 [2024-04-23 21:33:45.044367] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:32:50.814 [2024-04-23 21:33:45.044545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:111 nsid:1 lba:9129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:50.814 [2024-04-23 21:33:45.044580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:50.814 [2024-04-23 21:33:45.055033] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:32:50.814 [2024-04-23 21:33:45.055211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:50.815 [2024-04-23 21:33:45.055234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:50.815 [2024-04-23 21:33:45.065616] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:32:50.815 [2024-04-23 21:33:45.065804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:50.815 [2024-04-23 21:33:45.065827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:50.815 [2024-04-23 21:33:45.076178] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:32:50.815 [2024-04-23 21:33:45.076357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:19762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:50.815 [2024-04-23 21:33:45.076381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:50.815 [2024-04-23 21:33:45.086793] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:32:50.815 [2024-04-23 21:33:45.086970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:25461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:50.815 [2024-04-23 21:33:45.086994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:51.076 [2024-04-23 21:33:45.097411] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:32:51.076 [2024-04-23 21:33:45.097590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:16167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:51.076 [2024-04-23 21:33:45.097610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:51.076 [2024-04-23 21:33:45.108029] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:32:51.076 [2024-04-23 21:33:45.108208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:11078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:51.076 [2024-04-23 21:33:45.108230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:51.076 [2024-04-23 21:33:45.118639] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:32:51.076 [2024-04-23 21:33:45.118818] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:3485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:51.076 [2024-04-23 21:33:45.118840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:51.076 [2024-04-23 21:33:45.129242] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:32:51.076 [2024-04-23 21:33:45.129419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:9980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:51.076 [2024-04-23 21:33:45.129442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:51.076 [2024-04-23 21:33:45.140133] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:32:51.076 [2024-04-23 21:33:45.140323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:20421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:51.076 [2024-04-23 21:33:45.140344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:51.076 [2024-04-23 21:33:45.150786] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:32:51.076 [2024-04-23 21:33:45.150970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:8736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:51.076 [2024-04-23 21:33:45.150992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:51.076 [2024-04-23 21:33:45.161405] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:32:51.076 [2024-04-23 21:33:45.161584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:2625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:51.076 [2024-04-23 21:33:45.161607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:51.076 [2024-04-23 21:33:45.172048] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:32:51.076 [2024-04-23 21:33:45.172224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:20492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:51.076 [2024-04-23 21:33:45.172245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:51.076 [2024-04-23 21:33:45.182670] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:32:51.076 [2024-04-23 21:33:45.182846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:6296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:51.076 [2024-04-23 21:33:45.182866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:51.076 [2024-04-23 21:33:45.193293] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) 
with pdu=0x2000195fcdd0 00:32:51.076 [2024-04-23 21:33:45.193470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:8399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:51.076 [2024-04-23 21:33:45.193492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:51.076 [2024-04-23 21:33:45.203926] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:32:51.076 [2024-04-23 21:33:45.204105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:17826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:51.076 [2024-04-23 21:33:45.204127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:51.076 [2024-04-23 21:33:45.214554] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:32:51.076 [2024-04-23 21:33:45.214735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:20774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:51.076 [2024-04-23 21:33:45.214756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:51.076 [2024-04-23 21:33:45.225128] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:32:51.076 [2024-04-23 21:33:45.225303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:24316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:51.076 [2024-04-23 21:33:45.225323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:51.076 [2024-04-23 21:33:45.235772] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:32:51.076 [2024-04-23 21:33:45.235945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:10601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:51.076 [2024-04-23 21:33:45.235967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:51.076 [2024-04-23 21:33:45.246367] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:32:51.076 [2024-04-23 21:33:45.246544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:22389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:51.076 [2024-04-23 21:33:45.246567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:51.076 [2024-04-23 21:33:45.256983] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:32:51.076 [2024-04-23 21:33:45.257162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:17273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:51.076 [2024-04-23 21:33:45.257186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:51.076 [2024-04-23 21:33:45.267583] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0
00:32:51.076 [2024-04-23 21:33:45.267766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:15485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:51.076 [2024-04-23 21:33:45.267788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:32:51.077 [2024-04-23 21:33:45.278159] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0
00:32:51.077 [2024-04-23 21:33:45.278339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:15285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:51.077 [2024-04-23 21:33:45.278361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007a p:0 m:0 dnr:0
[... the same Data digest error / COMMAND TRANSIENT TRANSPORT ERROR pair repeats roughly every 10 ms from 21:33:45.288 through 21:33:45.741, cycling over cids 1, 15, 23, 31, 39, 47, 55, 63, 71, 79, 87, 95, 103, 111, 119-126 ...]
00:32:51.599 [2024-04-23 21:33:45.753880] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0
00:32:51.599 [2024-04-23 21:33:45.754101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:11560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:51.599 [2024-04-23 21:33:45.754126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:32:51.599
00:32:51.599 Latency(us)
00:32:51.599 Device Information                     : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:32:51.599 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:32:51.599 nvme0n1                                :       2.01   23335.59      91.15      0.00     0.00    5473.47    4897.95   14555.89
00:32:51.599 ===================================================================================================================
00:32:51.599 Total                                  :              23335.59      91.15      0.00     0.00    5473.47    4897.95   14555.89
00:32:51.599 0
00:32:51.599 21:33:45 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:32:51.599 21:33:45 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:32:51.599 21:33:45 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:32:51.599 | .driver_specific
00:32:51.599 | .nvme_error
00:32:51.599 | .status_code
00:32:51.599 | .command_transient_transport_error'
00:32:51.599 21:33:45 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:32:51.914 21:33:45 -- host/digest.sh@71 -- # (( 183 > 0 ))
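For reference, the get_transient_errcount check above just reads SPDK's per-bdev NVMe error counters over the bperf RPC socket. A minimal standalone sketch of the same query, reusing the rpc.py path, socket, bdev name, and jq filter from the trace above:

    #!/usr/bin/env bash
    # Count commands that completed with TRANSIENT TRANSPORT ERROR on nvme0n1.
    # Paths and names are the ones used by this run; the counter only exists
    # because the controller was attached with --nvme-error-stat (see below).
    SPDK_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk
    errcount=$("$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    # This run counted 183 such completions; the test only requires > 0.
    (( errcount > 0 )) && echo "injected digest errors were observed: $errcount"

The latency table above is also self-consistent: 23335.59 IOPS of 4096-byte writes is 23335.59 * 4096 / 2^20, about 91.15 MiB/s, and at queue depth 128 that throughput implies an average latency near 128 / 23335.59 s, i.e. the reported ~5473 us.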
00:32:51.914 21:33:45 -- host/digest.sh@73 -- # killprocess 1671157
00:32:51.914 21:33:45 -- common/autotest_common.sh@936 -- # '[' -z 1671157 ']'
00:32:51.914 21:33:45 -- common/autotest_common.sh@940 -- # kill -0 1671157
00:32:51.914 21:33:45 -- common/autotest_common.sh@941 -- # uname
00:32:51.914 21:33:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:32:51.914 21:33:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1671157
00:32:51.914 21:33:45 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:32:51.914 21:33:45 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:32:51.914 21:33:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1671157'
00:32:51.914 killing process with pid 1671157
00:32:51.914 21:33:45 -- common/autotest_common.sh@955 -- # kill 1671157
00:32:51.914 Received shutdown signal, test time was about 2.000000 seconds
00:32:51.914
00:32:51.914 Latency(us)
00:32:51.914 Device Information                     : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:32:51.914 ===================================================================================================================
00:32:51.914 Total                                  :       0.00       0.00       0.00      0.00     0.00       0.00       0.00
00:32:51.914 21:33:45 -- common/autotest_common.sh@960 -- # wait 1671157
00:32:52.173 21:33:46 -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:32:52.173 21:33:46 -- host/digest.sh@54 -- # local rw bs qd
00:32:52.173 21:33:46 -- host/digest.sh@56 -- # rw=randwrite
00:32:52.173 21:33:46 -- host/digest.sh@56 -- # bs=131072
00:32:52.173 21:33:46 -- host/digest.sh@56 -- # qd=16
00:32:52.173 21:33:46 -- host/digest.sh@58 -- # bperfpid=1672522
00:32:52.173 21:33:46 -- host/digest.sh@60 -- # waitforlisten 1672522 /var/tmp/bperf.sock
00:32:52.173 21:33:46 -- common/autotest_common.sh@817 -- # '[' -z 1672522 ']'
00:32:52.173 21:33:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock
00:32:52.173 21:33:46 -- common/autotest_common.sh@822 -- # local max_retries=100
00:32:52.173 21:33:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:32:52.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:32:52.173 21:33:46 -- common/autotest_common.sh@826 -- # xtrace_disable
00:32:52.173 21:33:46 -- common/autotest_common.sh@10 -- # set +x
00:32:52.173 21:33:46 -- host/digest.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:32:52.173 [2024-04-23 21:33:46.407799] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization...
00:32:52.173 [2024-04-23 21:33:46.407909] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1672522 ]
00:32:52.173 I/O size of 131072 is greater than zero copy threshold (65536).
00:32:52.173 Zero copy mechanism will not be used.
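waitforlisten in the trace above blocks until the freshly forked bdevperf (pid 1672522) is alive and listening on /var/tmp/bperf.sock, giving up after max_retries=100. A hypothetical re-implementation of that wait loop, assuming only the pid, socket path, and retry budget shown in the trace:

    # Wait for a process to create its UNIX-domain RPC socket, polling at
    # 100 ms; the values below are the ones from this run.
    pid=1672522 sock=/var/tmp/bperf.sock max_retries=100
    until [ -S "$sock" ]; do
        kill -0 "$pid" 2>/dev/null || { echo "process $pid exited before listening"; exit 1; }
        (( --max_retries > 0 )) || { echo "timed out waiting for $sock"; exit 1; }
        sleep 0.1
    done
    echo "$sock is ready"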
00:32:52.431 EAL: No free 2048 kB hugepages reported on node 1 00:32:52.431 [2024-04-23 21:33:46.496140] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:52.431 [2024-04-23 21:33:46.585320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:53.001 21:33:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:32:53.001 21:33:47 -- common/autotest_common.sh@850 -- # return 0 00:32:53.001 21:33:47 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:53.001 21:33:47 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:53.001 21:33:47 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:32:53.001 21:33:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:53.001 21:33:47 -- common/autotest_common.sh@10 -- # set +x 00:32:53.001 21:33:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:53.001 21:33:47 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:53.001 21:33:47 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:53.571 nvme0n1 00:32:53.571 21:33:47 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:32:53.571 21:33:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:53.571 21:33:47 -- common/autotest_common.sh@10 -- # set +x 00:32:53.571 21:33:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:53.571 21:33:47 -- host/digest.sh@69 -- # bperf_py perform_tests 00:32:53.571 21:33:47 -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:53.571 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:53.571 Zero copy mechanism will not be used. 00:32:53.571 Running I/O for 2 seconds... 
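Pulled together, the setup traced above for this 131072-byte pass is: enable per-bdev NVMe error statistics with unlimited bdev retries, clear any stale injection, attach the TCP controller with data digest (--ddgst) enabled, arm the accel-layer crc32c corruption, then kick off the timed workload. A sketch of that sequence, under one assumption (rpc_cmd in the trace talks to the NVMe-oF target application's default RPC socket, which this excerpt does not show):

    SPDK_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk
    BPERF_RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/bperf.sock"
    # Keep NVMe error stats per bdev and retry failed I/O indefinitely, so the
    # injected digest errors are counted and retried instead of failing the job.
    $BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Target side (assumed default socket): make sure no injection is active yet.
    "$SPDK_DIR/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable
    # Attach the target over TCP with the data digest (DDGST) field enabled.
    $BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Target side again: corrupt every 32nd crc32c operation, so roughly one
    # in 32 data digests goes out wrong.
    "$SPDK_DIR/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 32
    # Run the 2-second randwrite workload configured on the bdevperf command line.
    "$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests

Each corrupted digest then shows up below as a tcp.c data digest error paired with a COMMAND TRANSIENT TRANSPORT ERROR completion, which is exactly the counter get_transient_errcount reads back afterwards.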
00:32:53.571 [2024-04-23 21:33:47.691819] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90
00:32:53.571 [2024-04-23 21:33:47.692281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:53.571 [2024-04-23 21:33:47.692320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:53.571 [2024-04-23 21:33:47.709762] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90
00:32:53.571 [2024-04-23 21:33:47.710089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:53.571 [2024-04-23 21:33:47.710120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... the same Data digest error / COMMAND TRANSIENT TRANSPORT ERROR pair repeats on cid:15 roughly every 10-20 ms from 21:33:47.729 through 21:33:48.812, with sqhd stepping through 0001/0021/0041/0061 ...]
00:32:54.609 [2024-04-23 21:33:48.823508] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90
00:32:54.609 [2024-04-23 21:33:48.823788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:54.609 [2024-04-23 21:33:48.823811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:54.609 [2024-04-23 21:33:48.835380] tcp.c:2047:data_crc32_calc_done:
*ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:32:54.609 [2024-04-23 21:33:48.835634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:54.609 [2024-04-23 21:33:48.835656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:54.609 [2024-04-23 21:33:48.847421] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:32:54.609 [2024-04-23 21:33:48.847697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:54.609 [2024-04-23 21:33:48.847722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:54.610 [2024-04-23 21:33:48.859611] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:32:54.610 [2024-04-23 21:33:48.859914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:54.610 [2024-04-23 21:33:48.859942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:54.610 [2024-04-23 21:33:48.872623] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:32:54.610 [2024-04-23 21:33:48.872955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:54.610 [2024-04-23 21:33:48.872982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:54.868 [2024-04-23 21:33:48.886486] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:32:54.868 [2024-04-23 21:33:48.886814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:54.868 [2024-04-23 21:33:48.886840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:54.868 [2024-04-23 21:33:48.899803] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:32:54.868 [2024-04-23 21:33:48.900098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:54.868 [2024-04-23 21:33:48.900123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:54.868 [2024-04-23 21:33:48.912515] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:32:54.868 [2024-04-23 21:33:48.912819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:54.868 [2024-04-23 21:33:48.912844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:32:54.868 [2024-04-23 21:33:48.925867] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:32:54.868 [2024-04-23 21:33:48.926164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:54.868 [2024-04-23 21:33:48.926190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:54.868 [2024-04-23 21:33:48.938981] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:32:54.868 [2024-04-23 21:33:48.939273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:54.868 [2024-04-23 21:33:48.939298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:54.868 [2024-04-23 21:33:48.952505] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:32:54.868 [2024-04-23 21:33:48.952723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:54.868 [2024-04-23 21:33:48.952749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:54.868 [2024-04-23 21:33:48.966907] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:32:54.868 [2024-04-23 21:33:48.967210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:54.868 [2024-04-23 21:33:48.967236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:54.868 [2024-04-23 21:33:48.979924] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:32:54.868 [2024-04-23 21:33:48.980201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:54.868 [2024-04-23 21:33:48.980224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:54.868 [2024-04-23 21:33:48.992027] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:32:54.868 [2024-04-23 21:33:48.992298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:54.868 [2024-04-23 21:33:48.992321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:54.868 [2024-04-23 21:33:49.003967] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:32:54.868 [2024-04-23 21:33:49.004225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:54.868 [2024-04-23 21:33:49.004252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:54.868 [2024-04-23 21:33:49.015889] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:32:54.868 [2024-04-23 21:33:49.016141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:54.868 [2024-04-23 21:33:49.016166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:54.868 [2024-04-23 21:33:49.027546] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:32:54.868 [2024-04-23 21:33:49.027819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:54.869 [2024-04-23 21:33:49.027842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:54.869 [2024-04-23 21:33:49.038778] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:32:54.869 [2024-04-23 21:33:49.039034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:54.869 [2024-04-23 21:33:49.039056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:54.869 [2024-04-23 21:33:49.050675] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:32:54.869 [2024-04-23 21:33:49.050907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:54.869 [2024-04-23 21:33:49.050928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:54.869 [2024-04-23 21:33:49.062454] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:32:54.869 [2024-04-23 21:33:49.062714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:54.869 [2024-04-23 21:33:49.062736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:54.869 [2024-04-23 21:33:49.073777] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:32:54.869 [2024-04-23 21:33:49.074031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:54.869 [2024-04-23 21:33:49.074054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:54.869 [2024-04-23 21:33:49.085103] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:32:54.869 [2024-04-23 21:33:49.085339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:54.869 [2024-04-23 
21:33:49.085360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:54.869 [2024-04-23 21:33:49.096462] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:32:54.869 [2024-04-23 21:33:49.096720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:54.869 [2024-04-23 21:33:49.096741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:54.869 [2024-04-23 21:33:49.107684] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:32:54.869 [2024-04-23 21:33:49.107935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:54.869 [2024-04-23 21:33:49.107956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:54.869 [2024-04-23 21:33:49.119685] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:32:54.869 [2024-04-23 21:33:49.119935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:54.869 [2024-04-23 21:33:49.119964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:54.869 [2024-04-23 21:33:49.131496] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:32:54.869 [2024-04-23 21:33:49.131749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:54.869 [2024-04-23 21:33:49.131770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:55.130 [2024-04-23 21:33:49.142786] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:32:55.130 [2024-04-23 21:33:49.143048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.130 [2024-04-23 21:33:49.143074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:55.130 [2024-04-23 21:33:49.154357] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:32:55.130 [2024-04-23 21:33:49.154645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.130 [2024-04-23 21:33:49.154671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:55.130 [2024-04-23 21:33:49.167765] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:32:55.130 [2024-04-23 21:33:49.168061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.130 [2024-04-23 21:33:49.168092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:55.130 [2024-04-23 21:33:49.181383] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:32:55.130 [2024-04-23 21:33:49.181684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.130 [2024-04-23 21:33:49.181708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:55.130 [2024-04-23 21:33:49.194616] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:32:55.130 [2024-04-23 21:33:49.194901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.130 [2024-04-23 21:33:49.194925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:55.130 [2024-04-23 21:33:49.207525] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:32:55.130 [2024-04-23 21:33:49.207798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.130 [2024-04-23 21:33:49.207822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:55.130 [2024-04-23 21:33:49.219744] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:32:55.130 [2024-04-23 21:33:49.220009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.130 [2024-04-23 21:33:49.220031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:55.130 [2024-04-23 21:33:49.231489] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:32:55.130 [2024-04-23 21:33:49.231746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.130 [2024-04-23 21:33:49.231769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:55.130 [2024-04-23 21:33:49.242980] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:32:55.130 [2024-04-23 21:33:49.243175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.130 [2024-04-23 21:33:49.243196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:55.130 [2024-04-23 21:33:49.253861] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:32:55.130 [2024-04-23 21:33:49.254113] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.130 [2024-04-23 21:33:49.254136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:55.130 [2024-04-23 21:33:49.265183] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:32:55.130 [2024-04-23 21:33:49.265436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.130 [2024-04-23 21:33:49.265465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:55.130 [2024-04-23 21:33:49.277019] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:32:55.130 [2024-04-23 21:33:49.277286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.130 [2024-04-23 21:33:49.277308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:55.130 [2024-04-23 21:33:49.288057] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:32:55.130 [2024-04-23 21:33:49.288309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.130 [2024-04-23 21:33:49.288331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:55.130 [2024-04-23 21:33:49.299389] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:32:55.130 [2024-04-23 21:33:49.299585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.130 [2024-04-23 21:33:49.299606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:55.130 [2024-04-23 21:33:49.310733] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:32:55.130 [2024-04-23 21:33:49.310985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.130 [2024-04-23 21:33:49.311008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:55.130 [2024-04-23 21:33:49.322494] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:32:55.131 [2024-04-23 21:33:49.322747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.131 [2024-04-23 21:33:49.322769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:55.131 [2024-04-23 21:33:49.334641] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000005080) with pdu=0x2000195fef90 00:32:55.131 [2024-04-23 21:33:49.334896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.131 [2024-04-23 21:33:49.334919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:55.131 [2024-04-23 21:33:49.345620] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:32:55.131 [2024-04-23 21:33:49.345877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.131 [2024-04-23 21:33:49.345900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:55.131 [2024-04-23 21:33:49.357668] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:32:55.131 [2024-04-23 21:33:49.357918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.131 [2024-04-23 21:33:49.357941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:55.131 [2024-04-23 21:33:49.370252] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:32:55.131 [2024-04-23 21:33:49.370528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.131 [2024-04-23 21:33:49.370555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:55.131 [2024-04-23 21:33:49.381425] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:32:55.131 [2024-04-23 21:33:49.381601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.131 [2024-04-23 21:33:49.381624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:55.131 [2024-04-23 21:33:49.393519] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:32:55.131 [2024-04-23 21:33:49.393795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.131 [2024-04-23 21:33:49.393819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:55.393 [2024-04-23 21:33:49.406389] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:32:55.393 [2024-04-23 21:33:49.406692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.393 [2024-04-23 21:33:49.406717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:55.393 [2024-04-23 
21:33:49.418444] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:32:55.393 [2024-04-23 21:33:49.418727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.393 [2024-04-23 21:33:49.418752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:55.393 [2024-04-23 21:33:49.430960] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:32:55.393 [2024-04-23 21:33:49.431228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.393 [2024-04-23 21:33:49.431251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:55.393 [2024-04-23 21:33:49.443190] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:32:55.393 [2024-04-23 21:33:49.443460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.393 [2024-04-23 21:33:49.443484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:55.393 [2024-04-23 21:33:49.454874] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:32:55.393 [2024-04-23 21:33:49.455142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.393 [2024-04-23 21:33:49.455168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:55.393 [2024-04-23 21:33:49.465624] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:32:55.393 [2024-04-23 21:33:49.465881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.393 [2024-04-23 21:33:49.465904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:55.393 [2024-04-23 21:33:49.477354] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:32:55.393 [2024-04-23 21:33:49.477608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.393 [2024-04-23 21:33:49.477674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:55.393 [2024-04-23 21:33:49.488479] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:32:55.393 [2024-04-23 21:33:49.488715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.393 [2024-04-23 21:33:49.488735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:55.393 [2024-04-23 21:33:49.499927] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:32:55.393 [2024-04-23 21:33:49.500177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.393 [2024-04-23 21:33:49.500199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:55.393 [2024-04-23 21:33:49.511718] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:32:55.393 [2024-04-23 21:33:49.511977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.393 [2024-04-23 21:33:49.512000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:55.393 [2024-04-23 21:33:49.523826] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:32:55.393 [2024-04-23 21:33:49.524083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.393 [2024-04-23 21:33:49.524105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:55.393 [2024-04-23 21:33:49.535256] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:32:55.393 [2024-04-23 21:33:49.535452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.393 [2024-04-23 21:33:49.535473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:55.394 [2024-04-23 21:33:49.547307] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:32:55.394 [2024-04-23 21:33:49.547500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.394 [2024-04-23 21:33:49.547521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:55.394 [2024-04-23 21:33:49.559091] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:32:55.394 [2024-04-23 21:33:49.559344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.394 [2024-04-23 21:33:49.559401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:55.394 [2024-04-23 21:33:49.570188] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:32:55.394 [2024-04-23 21:33:49.570493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.394 [2024-04-23 21:33:49.570519] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:55.394 [2024-04-23 21:33:49.581770] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:32:55.394 [2024-04-23 21:33:49.582023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.394 [2024-04-23 21:33:49.582046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:55.394 [2024-04-23 21:33:49.592963] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:32:55.394 [2024-04-23 21:33:49.593217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.394 [2024-04-23 21:33:49.593239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:55.394 [2024-04-23 21:33:49.603636] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:32:55.394 [2024-04-23 21:33:49.603889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.394 [2024-04-23 21:33:49.603913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:55.394 [2024-04-23 21:33:49.615221] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:32:55.394 [2024-04-23 21:33:49.615475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.394 [2024-04-23 21:33:49.615496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:55.394 [2024-04-23 21:33:49.626709] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:32:55.394 [2024-04-23 21:33:49.626972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.394 [2024-04-23 21:33:49.626993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:55.394 [2024-04-23 21:33:49.637669] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:32:55.394 [2024-04-23 21:33:49.637921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.394 [2024-04-23 21:33:49.637943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:55.394 [2024-04-23 21:33:49.649688] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:32:55.394 [2024-04-23 21:33:49.649939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:32:55.394 [2024-04-23 21:33:49.649968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:55.394 [2024-04-23 21:33:49.660796] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:32:55.394 [2024-04-23 21:33:49.660958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.394 [2024-04-23 21:33:49.660980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:55.657 [2024-04-23 21:33:49.672138] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:32:55.657 [2024-04-23 21:33:49.672326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:55.657 [2024-04-23 21:33:49.672348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:55.657 00:32:55.657 Latency(us) 00:32:55.657 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:55.657 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:32:55.657 nvme0n1 : 2.01 2367.97 296.00 0.00 0.00 6744.50 4794.48 22489.20 00:32:55.657 =================================================================================================================== 00:32:55.657 Total : 2367.97 296.00 0.00 0.00 6744.50 4794.48 22489.20 00:32:55.657 0 00:32:55.657 21:33:49 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:32:55.658 21:33:49 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:32:55.658 21:33:49 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:32:55.658 21:33:49 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:32:55.658 | .driver_specific 00:32:55.658 | .nvme_error 00:32:55.658 | .status_code 00:32:55.658 | .command_transient_transport_error' 00:32:55.658 21:33:49 -- host/digest.sh@71 -- # (( 153 > 0 )) 00:32:55.658 21:33:49 -- host/digest.sh@73 -- # killprocess 1672522 00:32:55.658 21:33:49 -- common/autotest_common.sh@936 -- # '[' -z 1672522 ']' 00:32:55.658 21:33:49 -- common/autotest_common.sh@940 -- # kill -0 1672522 00:32:55.658 21:33:49 -- common/autotest_common.sh@941 -- # uname 00:32:55.658 21:33:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:32:55.658 21:33:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1672522 00:32:55.658 21:33:49 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:32:55.658 21:33:49 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:32:55.658 21:33:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1672522' 00:32:55.658 killing process with pid 1672522 00:32:55.658 21:33:49 -- common/autotest_common.sh@955 -- # kill 1672522 00:32:55.658 Received shutdown signal, test time was about 2.000000 seconds 00:32:55.658 00:32:55.658 Latency(us) 00:32:55.658 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:55.658 =================================================================================================================== 00:32:55.658 Total : 0.00 0.00 0.00 0.00 
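Each triplet above is one WRITE whose NVMe/TCP data digest (a CRC32C over the PDU payload) failed verification, so the command completed with TRANSIENT TRANSPORT ERROR (00/22); the check traced above then asserts the counter actually moved. A condensed sketch of that check, with the RPC path, bperf socket, and jq filter copied from the trace (illustrative, not the verbatim host/digest.sh source):

# Count completions that failed with TRANSIENT TRANSPORT ERROR by reading
# the bdev's NVMe error counters over the bperf RPC socket.
get_transient_errcount() {
    local bdev=$1
    /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_get_iostat -b "$bdev" |
        jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
}

errcount=$(get_transient_errcount nvme0n1)   # this run reported 153
(( errcount > 0 ))                           # test fails unless the digest errors were counted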
0.00 0.00 0.00 00:32:55.658 21:33:49 -- common/autotest_common.sh@960 -- # wait 1672522 00:32:56.227 21:33:50 -- host/digest.sh@116 -- # killprocess 1668341 00:32:56.227 21:33:50 -- common/autotest_common.sh@936 -- # '[' -z 1668341 ']' 00:32:56.227 21:33:50 -- common/autotest_common.sh@940 -- # kill -0 1668341 00:32:56.227 21:33:50 -- common/autotest_common.sh@941 -- # uname 00:32:56.227 21:33:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:32:56.227 21:33:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1668341 00:32:56.227 21:33:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:32:56.227 21:33:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:32:56.227 21:33:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1668341' 00:32:56.227 killing process with pid 1668341 00:32:56.227 21:33:50 -- common/autotest_common.sh@955 -- # kill 1668341 00:32:56.227 21:33:50 -- common/autotest_common.sh@960 -- # wait 1668341 00:32:56.794 00:32:56.794 real 0m17.057s 00:32:56.794 user 0m32.797s 00:32:56.794 sys 0m3.257s 00:32:56.794 21:33:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:56.794 21:33:50 -- common/autotest_common.sh@10 -- # set +x 00:32:56.794 ************************************ 00:32:56.794 END TEST nvmf_digest_error 00:32:56.794 ************************************ 00:32:56.794 21:33:50 -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:32:56.794 21:33:50 -- host/digest.sh@150 -- # nvmftestfini 00:32:56.794 21:33:50 -- nvmf/common.sh@477 -- # nvmfcleanup 00:32:56.794 21:33:50 -- nvmf/common.sh@117 -- # sync 00:32:56.794 21:33:50 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:56.794 21:33:50 -- nvmf/common.sh@120 -- # set +e 00:32:56.794 21:33:50 -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:56.794 21:33:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:56.794 rmmod nvme_tcp 00:32:56.794 rmmod nvme_fabrics 00:32:56.794 rmmod nvme_keyring 00:32:56.794 21:33:50 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:56.794 21:33:50 -- nvmf/common.sh@124 -- # set -e 00:32:56.794 21:33:50 -- nvmf/common.sh@125 -- # return 0 00:32:56.794 21:33:50 -- nvmf/common.sh@478 -- # '[' -n 1668341 ']' 00:32:56.794 21:33:50 -- nvmf/common.sh@479 -- # killprocess 1668341 00:32:56.794 21:33:50 -- common/autotest_common.sh@936 -- # '[' -z 1668341 ']' 00:32:56.794 21:33:50 -- common/autotest_common.sh@940 -- # kill -0 1668341 00:32:56.794 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (1668341) - No such process 00:32:56.794 21:33:50 -- common/autotest_common.sh@963 -- # echo 'Process with pid 1668341 is not found' 00:32:56.794 Process with pid 1668341 is not found 00:32:56.794 21:33:50 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:32:56.794 21:33:50 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:32:56.794 21:33:50 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:32:56.794 21:33:50 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:56.794 21:33:50 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:56.794 21:33:50 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:56.794 21:33:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:56.794 21:33:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:58.715 21:33:52 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:58.715 00:32:58.715 real 1m30.657s 00:32:58.715 user 2m11.061s 00:32:58.715 sys 
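The nvmftestfini teardown traced above follows a fixed pattern: sync, retry the module unload with failures tolerated, then kill the target and shrug off an already-gone pid. A minimal sketch of that pattern (the nvmfpid variable is a placeholder; this is not the exact nvmf/common.sh source):

sync
set +e
for i in {1..20}; do
    # modprobe -v -r prints the rmmod commands it runs, matching the
    # "rmmod nvme_tcp / nvme_fabrics / nvme_keyring" lines above
    modprobe -v -r nvme-tcp && break
    sleep 1
done
modprobe -v -r nvme-fabrics
set -e
kill "$nvmfpid" 2>/dev/null || echo "Process with pid $nvmfpid is not found"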
00:32:58.715
00:32:58.715 real 1m30.657s
00:32:58.715 user 2m11.061s
00:32:58.715 sys 0m14.648s
00:32:58.715 21:33:52 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:32:58.715 21:33:52 -- common/autotest_common.sh@10 -- # set +x
00:32:58.715 ************************************
00:32:58.715 END TEST nvmf_digest
00:32:58.715 ************************************
00:32:58.715 21:33:52 -- nvmf/nvmf.sh@108 -- # [[ 0 -eq 1 ]]
00:32:58.715 21:33:52 -- nvmf/nvmf.sh@113 -- # [[ 0 -eq 1 ]]
00:32:58.715 21:33:52 -- nvmf/nvmf.sh@118 -- # [[ phy-fallback == phy ]]
00:32:58.715 21:33:52 -- nvmf/nvmf.sh@123 -- # timing_exit host
00:32:58.716 21:33:52 -- common/autotest_common.sh@716 -- # xtrace_disable
00:32:58.716 21:33:52 -- common/autotest_common.sh@10 -- # set +x
00:32:58.977 21:33:52 -- nvmf/nvmf.sh@125 -- # trap - SIGINT SIGTERM EXIT
00:32:58.977
00:32:58.977 real 23m12.875s
00:32:58.977 user 60m15.917s
00:32:58.977 sys 5m21.899s
00:32:58.978 21:33:52 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:32:58.978 21:33:52 -- common/autotest_common.sh@10 -- # set +x
00:32:58.978 ************************************
00:32:58.978 END TEST nvmf_tcp
00:32:58.978 ************************************
00:32:58.978 21:33:53 -- spdk/autotest.sh@286 -- # [[ 0 -eq 0 ]]
00:32:58.978 21:33:53 -- spdk/autotest.sh@287 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:32:58.978 21:33:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:32:58.978 21:33:53 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:32:58.978 21:33:53 -- common/autotest_common.sh@10 -- # set +x
00:32:58.978 ************************************
00:32:58.978 START TEST spdkcli_nvmf_tcp
00:32:58.978 ************************************
00:32:58.978 21:33:53 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp
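Every START TEST/END TEST pair in this log, with the real/user/sys block between them, comes from the run_test wrapper invoked above. A rough sketch of the wrapping it performs, banners plus a time measurement around the test script (an assumption about the shape, not the exact autotest_common.sh implementation):

run_test_sketch() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"          # e.g. test/spdkcli/nvmf.sh --transport=tcp
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
}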
00:32:58.978 * Looking for test storage...
00:32:58.978 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli
00:32:58.978 21:33:53 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/common.sh
00:32:58.978 21:33:53 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/spdkcli_job.py
00:32:58.978 21:33:53 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/clear_config.py
00:32:58.978 21:33:53 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh
00:32:58.978 21:33:53 -- nvmf/common.sh@7 -- # uname -s
00:32:58.978 21:33:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:32:58.978 21:33:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:32:58.978 21:33:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:32:58.978 21:33:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:32:58.978 21:33:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:32:58.978 21:33:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:32:58.978 21:33:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:32:58.978 21:33:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:32:58.978 21:33:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:32:58.978 21:33:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:32:58.978 21:33:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3
00:32:58.978 21:33:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3
00:32:58.978 21:33:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:32:58.978 21:33:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:32:58.978 21:33:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:32:58.978 21:33:53 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
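The NVME_HOSTNQN/NVME_HOSTID values traced above come from nvme-cli. A sketch of how the same identifiers can be derived and used, assuming the host ID is just the uuid suffix of the generated NQN (the real nvmf/common.sh wiring may differ in detail):

NVME_HOSTNQN=$(nvme gen-hostnqn)     # e.g. nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3
NVME_HOSTID=${NVME_HOSTNQN##*:}      # strip everything up to the last ':' to keep the bare uuid
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
# later consumed by e.g.: nvme connect "${NVME_HOST[@]}" -t tcp -a 127.0.0.1 -s 4420 -n nqn.2016-06.io.spdk:testnqn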
00:32:58.978 21:33:53 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh
00:32:58.978 21:33:53 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:32:58.978 21:33:53 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:32:58.978 21:33:53 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:32:58.978 21:33:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:58.978 21:33:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:58.978 21:33:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:58.978 21:33:53 -- paths/export.sh@5 -- # export PATH
00:32:58.978 21:33:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:58.978 21:33:53 -- nvmf/common.sh@47 -- # : 0
00:32:58.978 21:33:53 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:32:58.978 21:33:53 -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:32:58.978 21:33:53 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:32:58.978 21:33:53 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:32:58.978 21:33:53 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:32:58.978 21:33:53 -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:32:58.978 21:33:53 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:32:58.978 21:33:53 -- nvmf/common.sh@51 -- # have_pci_nics=0
00:32:58.978 21:33:53 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test
00:32:58.978 21:33:53 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf
00:32:58.978 21:33:53 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT
00:32:58.978 21:33:53 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt
00:32:58.978 21:33:53 -- common/autotest_common.sh@710 -- # xtrace_disable
00:32:58.978 21:33:53 -- common/autotest_common.sh@10 -- # set +x
00:32:58.978 21:33:53 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt
00:32:58.978 21:33:53 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1674980
00:32:58.978 21:33:53 -- spdkcli/common.sh@34 -- # waitforlisten 1674980
00:32:58.978 21:33:53 -- common/autotest_common.sh@817 -- # '[' -z 1674980 ']'
00:32:58.978 21:33:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:32:58.978 21:33:53 -- common/autotest_common.sh@822 -- # local max_retries=100
00:32:58.978 21:33:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:32:58.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:32:58.978 21:33:53 -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0
00:32:58.978 21:33:53 -- common/autotest_common.sh@826 -- # xtrace_disable
00:32:58.978 21:33:53 -- common/autotest_common.sh@10 -- # set +x
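The waitforlisten call traced above blocks until the freshly started target answers on /var/tmp/spdk.sock. A sketch of the launch-and-poll pattern, assuming a simple retry loop (max_retries=100 matches the trace; the real helper lives in autotest_common.sh):

/var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 &
nvmf_tgt_pid=$!
for ((i = 0; i < 100; i++)); do
    # rpc_get_methods only succeeds once the app is up and listening on the socket
    /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
    sleep 0.5
done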
00:32:59.236 [2024-04-23 21:33:53.277694] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1674980 ] 00:32:59.236 EAL: No free 2048 kB hugepages reported on node 1 00:32:59.236 [2024-04-23 21:33:53.399245] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:59.236 [2024-04-23 21:33:53.494637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:59.236 [2024-04-23 21:33:53.494640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:59.803 21:33:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:32:59.803 21:33:53 -- common/autotest_common.sh@850 -- # return 0 00:32:59.803 21:33:53 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:32:59.803 21:33:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:32:59.803 21:33:53 -- common/autotest_common.sh@10 -- # set +x 00:32:59.803 21:33:54 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:32:59.803 21:33:54 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:32:59.803 21:33:54 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:32:59.803 21:33:54 -- common/autotest_common.sh@710 -- # xtrace_disable 00:32:59.803 21:33:54 -- common/autotest_common.sh@10 -- # set +x 00:32:59.803 21:33:54 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:32:59.803 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:32:59.803 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:32:59.803 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:32:59.803 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:32:59.803 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:32:59.803 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:32:59.803 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:59.803 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:32:59.804 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:32:59.804 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:59.804 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:59.804 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:32:59.804 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:59.804 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:59.804 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:32:59.804 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:59.804 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' 
'\''127.0.0.1:4261'\'' True 00:32:59.804 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:59.804 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:59.804 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:32:59.804 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:32:59.804 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:32:59.804 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:32:59.804 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:59.804 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:32:59.804 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:32:59.804 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:32:59.804 ' 00:33:00.062 [2024-04-23 21:33:54.314299] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:33:02.597 [2024-04-23 21:33:56.367304] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:03.531 [2024-04-23 21:33:57.528921] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:33:05.436 [2024-04-23 21:33:59.659663] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:33:07.340 [2024-04-23 21:34:01.497984] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:33:08.726 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:33:08.726 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:33:08.726 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:33:08.726 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:33:08.726 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:33:08.726 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:33:08.726 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:33:08.726 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:08.726 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:33:08.726 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:33:08.726 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:08.726 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:08.726 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:33:08.726 
Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:08.726 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:08.726 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:33:08.726 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:08.726 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:33:08.726 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:08.726 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:08.726 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:33:08.726 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:33:08.726 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:33:08.726 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:33:08.726 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:08.726 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:33:08.726 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:33:08.726 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:33:08.988 21:34:03 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:33:08.988 21:34:03 -- common/autotest_common.sh@716 -- # xtrace_disable 00:33:08.988 21:34:03 -- common/autotest_common.sh@10 -- # set +x 00:33:08.988 21:34:03 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:33:08.988 21:34:03 -- common/autotest_common.sh@710 -- # xtrace_disable 00:33:08.988 21:34:03 -- common/autotest_common.sh@10 -- # set +x 00:33:08.988 21:34:03 -- spdkcli/nvmf.sh@69 -- # check_match 00:33:08.988 21:34:03 -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:33:09.248 21:34:03 -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:33:09.248 21:34:03 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:33:09.248 21:34:03 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:33:09.248 21:34:03 -- common/autotest_common.sh@716 -- # xtrace_disable 00:33:09.248 21:34:03 -- common/autotest_common.sh@10 -- # set +x 00:33:09.248 21:34:03 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:33:09.248 21:34:03 -- common/autotest_common.sh@710 -- # xtrace_disable 00:33:09.248 21:34:03 -- common/autotest_common.sh@10 -- # set +x 
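Each spdkcli command executed above is essentially one SPDK JSON-RPC call. A hedged rpc.py equivalent of the cnode1 slice of that configuration (the socket path and relative paths are assumptions; the flags are the stock rpc.py options):

  RPC='./scripts/rpc.py -s /var/tmp/spdk.sock'
  $RPC bdev_malloc_create -b Malloc3 32 512          # 32 MiB bdev, 512-byte blocks
  $RPC nvmf_create_transport -t tcp -u 8192          # io_unit_size=8192, as above
  $RPC nvmf_create_subsystem nqn.2014-08.org.spdk:cnode1 -s N37SXV509SRW -m 4 -a
  $RPC nvmf_subsystem_add_ns nqn.2014-08.org.spdk:cnode1 Malloc3 -n 1
  $RPC nvmf_subsystem_add_listener nqn.2014-08.org.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4260 -f ipv4

The check_match step that follows diffs 'spdkcli.py ll /nvmf' output against the spdkcli_nvmf.test match file, so any drift in the tree built here fails the test.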
00:33:09.248 21:34:03 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:33:09.248 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:33:09.248 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:09.248 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:33:09.248 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:33:09.248 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:33:09.248 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:33:09.248 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:09.248 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:33:09.248 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:33:09.248 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:33:09.249 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:33:09.249 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:33:09.249 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:33:09.249 ' 00:33:14.525 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:33:14.525 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:33:14.525 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:14.525 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:33:14.525 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:33:14.525 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:33:14.525 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:33:14.525 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:14.525 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:33:14.525 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:33:14.525 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:33:14.525 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:33:14.525 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:33:14.525 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:33:14.525 21:34:08 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:33:14.525 21:34:08 -- common/autotest_common.sh@716 -- # xtrace_disable 00:33:14.525 21:34:08 -- common/autotest_common.sh@10 -- # set +x 00:33:14.525 21:34:08 -- spdkcli/nvmf.sh@90 -- # killprocess 1674980 00:33:14.525 21:34:08 -- common/autotest_common.sh@936 -- # '[' -z 1674980 ']' 00:33:14.525 21:34:08 -- common/autotest_common.sh@940 -- # kill -0 1674980 00:33:14.525 21:34:08 -- common/autotest_common.sh@941 -- # uname 00:33:14.525 21:34:08 -- common/autotest_common.sh@941 -- # '[' 
Linux = Linux ']' 00:33:14.525 21:34:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1674980 00:33:14.525 21:34:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:33:14.525 21:34:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:33:14.525 21:34:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1674980' 00:33:14.525 killing process with pid 1674980 00:33:14.525 21:34:08 -- common/autotest_common.sh@955 -- # kill 1674980 00:33:14.525 [2024-04-23 21:34:08.556633] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:33:14.525 21:34:08 -- common/autotest_common.sh@960 -- # wait 1674980 00:33:14.783 21:34:09 -- spdkcli/nvmf.sh@1 -- # cleanup 00:33:14.783 21:34:09 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:33:14.783 21:34:09 -- spdkcli/common.sh@13 -- # '[' -n 1674980 ']' 00:33:14.783 21:34:09 -- spdkcli/common.sh@14 -- # killprocess 1674980 00:33:14.783 21:34:09 -- common/autotest_common.sh@936 -- # '[' -z 1674980 ']' 00:33:14.783 21:34:09 -- common/autotest_common.sh@940 -- # kill -0 1674980 00:33:14.783 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (1674980) - No such process 00:33:14.783 21:34:09 -- common/autotest_common.sh@963 -- # echo 'Process with pid 1674980 is not found' 00:33:14.783 Process with pid 1674980 is not found 00:33:14.783 21:34:09 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:33:14.783 21:34:09 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:33:14.783 21:34:09 -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:33:14.783 00:33:14.783 real 0m15.928s 00:33:14.783 user 0m32.304s 00:33:14.783 sys 0m0.767s 00:33:14.783 21:34:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:14.783 21:34:09 -- common/autotest_common.sh@10 -- # set +x 00:33:14.783 ************************************ 00:33:14.783 END TEST spdkcli_nvmf_tcp 00:33:14.783 ************************************ 00:33:15.042 21:34:09 -- spdk/autotest.sh@288 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:15.042 21:34:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:33:15.042 21:34:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:15.042 21:34:09 -- common/autotest_common.sh@10 -- # set +x 00:33:15.042 ************************************ 00:33:15.042 START TEST nvmf_identify_passthru 00:33:15.042 ************************************ 00:33:15.042 21:34:09 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:15.042 * Looking for test storage...
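The shutdown traced just above is autotest's killprocess idiom: probe the pid with kill -0, read the process name (reactor_0 here) via ps -o comm=, send the kill, then wait to reap, with the second killprocess in cleanup tolerating 'No such process'. Condensed into a standalone sketch (the function name is ours, not autotest_common.sh's):

  stop_spdk_app() {    # illustrative name
      local pid=$1
      if ! kill -0 "$pid" 2>/dev/null; then
          echo "Process with pid $pid is not found"
          return 0
      fi
      echo "killing process with pid $pid ($(ps --no-headers -o comm= "$pid"))"
      kill "$pid"
      wait "$pid" 2>/dev/null || true    # reap when the target is our child
  }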
00:33:15.042 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:33:15.043 21:34:09 -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:33:15.043 21:34:09 -- nvmf/common.sh@7 -- # uname -s 00:33:15.043 21:34:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:15.043 21:34:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:15.043 21:34:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:15.043 21:34:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:15.043 21:34:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:15.043 21:34:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:15.043 21:34:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:15.043 21:34:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:15.043 21:34:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:15.043 21:34:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:15.043 21:34:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:33:15.043 21:34:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:33:15.043 21:34:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:15.043 21:34:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:15.043 21:34:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:33:15.043 21:34:09 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:15.043 21:34:09 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:33:15.043 21:34:09 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:15.043 21:34:09 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:15.043 21:34:09 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:15.043 21:34:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:15.043 21:34:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:15.043 21:34:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:15.043 21:34:09 -- paths/export.sh@5 -- # export PATH 00:33:15.043 21:34:09 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:15.043 21:34:09 -- nvmf/common.sh@47 -- # : 0 00:33:15.043 21:34:09 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:15.043 21:34:09 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:15.043 21:34:09 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:15.043 21:34:09 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:15.043 21:34:09 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:15.043 21:34:09 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:15.043 21:34:09 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:15.043 21:34:09 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:15.043 21:34:09 -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:33:15.043 21:34:09 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:15.043 21:34:09 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:15.043 21:34:09 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:15.043 21:34:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:15.043 21:34:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:15.043 21:34:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:15.043 21:34:09 -- paths/export.sh@5 -- # export PATH 00:33:15.043 21:34:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:15.043 21:34:09 -- 
target/identify_passthru.sh@12 -- # nvmftestinit 00:33:15.043 21:34:09 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:33:15.043 21:34:09 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:15.043 21:34:09 -- nvmf/common.sh@437 -- # prepare_net_devs 00:33:15.043 21:34:09 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:33:15.043 21:34:09 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:33:15.043 21:34:09 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:15.043 21:34:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:15.043 21:34:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:15.043 21:34:09 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:33:15.043 21:34:09 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:33:15.043 21:34:09 -- nvmf/common.sh@285 -- # xtrace_disable 00:33:15.043 21:34:09 -- common/autotest_common.sh@10 -- # set +x 00:33:20.313 21:34:14 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:33:20.313 21:34:14 -- nvmf/common.sh@291 -- # pci_devs=() 00:33:20.313 21:34:14 -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:20.313 21:34:14 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:20.313 21:34:14 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:20.313 21:34:14 -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:20.313 21:34:14 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:20.313 21:34:14 -- nvmf/common.sh@295 -- # net_devs=() 00:33:20.313 21:34:14 -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:20.313 21:34:14 -- nvmf/common.sh@296 -- # e810=() 00:33:20.313 21:34:14 -- nvmf/common.sh@296 -- # local -ga e810 00:33:20.313 21:34:14 -- nvmf/common.sh@297 -- # x722=() 00:33:20.313 21:34:14 -- nvmf/common.sh@297 -- # local -ga x722 00:33:20.313 21:34:14 -- nvmf/common.sh@298 -- # mlx=() 00:33:20.313 21:34:14 -- nvmf/common.sh@298 -- # local -ga mlx 00:33:20.313 21:34:14 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:20.313 21:34:14 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:20.313 21:34:14 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:20.313 21:34:14 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:20.313 21:34:14 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:20.313 21:34:14 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:20.313 21:34:14 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:20.313 21:34:14 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:20.313 21:34:14 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:20.313 21:34:14 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:20.313 21:34:14 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:20.313 21:34:14 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:20.313 21:34:14 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:20.313 21:34:14 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:33:20.313 21:34:14 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:33:20.313 21:34:14 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:33:20.313 21:34:14 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:20.313 21:34:14 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:20.313 21:34:14 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:33:20.313 Found 0000:27:00.0 (0x8086 - 
0x159b) 00:33:20.313 21:34:14 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:20.313 21:34:14 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:20.313 21:34:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:20.313 21:34:14 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:20.313 21:34:14 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:20.313 21:34:14 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:20.313 21:34:14 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:33:20.313 Found 0000:27:00.1 (0x8086 - 0x159b) 00:33:20.313 21:34:14 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:20.313 21:34:14 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:20.313 21:34:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:20.313 21:34:14 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:20.313 21:34:14 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:20.313 21:34:14 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:20.313 21:34:14 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:33:20.313 21:34:14 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:20.313 21:34:14 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:20.314 21:34:14 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:33:20.314 21:34:14 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:20.314 21:34:14 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:33:20.314 Found net devices under 0000:27:00.0: cvl_0_0 00:33:20.314 21:34:14 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:33:20.314 21:34:14 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:20.314 21:34:14 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:20.314 21:34:14 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:33:20.314 21:34:14 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:20.314 21:34:14 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:33:20.314 Found net devices under 0000:27:00.1: cvl_0_1 00:33:20.314 21:34:14 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:33:20.314 21:34:14 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:33:20.314 21:34:14 -- nvmf/common.sh@403 -- # is_hw=yes 00:33:20.314 21:34:14 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:33:20.314 21:34:14 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:33:20.314 21:34:14 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:33:20.314 21:34:14 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:20.314 21:34:14 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:20.314 21:34:14 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:20.314 21:34:14 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:20.314 21:34:14 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:20.314 21:34:14 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:20.314 21:34:14 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:20.314 21:34:14 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:20.314 21:34:14 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:20.314 21:34:14 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:20.314 21:34:14 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:20.314 21:34:14 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:20.314 21:34:14 -- nvmf/common.sh@251 -- # ip link set 
cvl_0_0 netns cvl_0_0_ns_spdk 00:33:20.314 21:34:14 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:20.314 21:34:14 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:20.314 21:34:14 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:20.314 21:34:14 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:20.314 21:34:14 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:20.314 21:34:14 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:20.314 21:34:14 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:20.314 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:20.314 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.632 ms 00:33:20.314 00:33:20.314 --- 10.0.0.2 ping statistics --- 00:33:20.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:20.314 rtt min/avg/max/mdev = 0.632/0.632/0.632/0.000 ms 00:33:20.314 21:34:14 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:20.314 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:20.314 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.426 ms 00:33:20.314 00:33:20.314 --- 10.0.0.1 ping statistics --- 00:33:20.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:20.314 rtt min/avg/max/mdev = 0.426/0.426/0.426/0.000 ms 00:33:20.314 21:34:14 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:20.314 21:34:14 -- nvmf/common.sh@411 -- # return 0 00:33:20.314 21:34:14 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:33:20.314 21:34:14 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:20.314 21:34:14 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:33:20.314 21:34:14 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:33:20.314 21:34:14 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:20.314 21:34:14 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:33:20.314 21:34:14 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:33:20.314 21:34:14 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:33:20.314 21:34:14 -- common/autotest_common.sh@710 -- # xtrace_disable 00:33:20.314 21:34:14 -- common/autotest_common.sh@10 -- # set +x 00:33:20.314 21:34:14 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:33:20.314 21:34:14 -- common/autotest_common.sh@1510 -- # bdfs=() 00:33:20.314 21:34:14 -- common/autotest_common.sh@1510 -- # local bdfs 00:33:20.314 21:34:14 -- common/autotest_common.sh@1511 -- # bdfs=($(get_nvme_bdfs)) 00:33:20.314 21:34:14 -- common/autotest_common.sh@1511 -- # get_nvme_bdfs 00:33:20.314 21:34:14 -- common/autotest_common.sh@1499 -- # bdfs=() 00:33:20.314 21:34:14 -- common/autotest_common.sh@1499 -- # local bdfs 00:33:20.314 21:34:14 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:33:20.314 21:34:14 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/gen_nvme.sh 00:33:20.314 21:34:14 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:33:20.575 21:34:14 -- common/autotest_common.sh@1501 -- # (( 2 == 0 )) 00:33:20.575 21:34:14 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:03:00.0 0000:c9:00.0 00:33:20.575 21:34:14 -- common/autotest_common.sh@1513 -- # echo 0000:03:00.0 00:33:20.575 21:34:14 -- target/identify_passthru.sh@16 -- # bdf=0000:03:00.0 00:33:20.575 21:34:14 -- 
target/identify_passthru.sh@17 -- # '[' -z 0000:03:00.0 ']' 00:33:20.575 21:34:14 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:33:20.575 21:34:14 -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:03:00.0' -i 0 00:33:20.575 21:34:14 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:33:20.575 EAL: No free 2048 kB hugepages reported on node 1 00:33:21.956 21:34:15 -- target/identify_passthru.sh@23 -- # nvme_serial_number=233442AA2262 00:33:21.956 21:34:15 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:33:21.956 21:34:15 -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:03:00.0' -i 0 00:33:21.956 21:34:15 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:33:21.956 EAL: No free 2048 kB hugepages reported on node 1 00:33:22.895 21:34:17 -- target/identify_passthru.sh@24 -- # nvme_model_number=Micron_7450_MTFDKBA960TFR 00:33:22.895 21:34:17 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:33:22.895 21:34:17 -- common/autotest_common.sh@716 -- # xtrace_disable 00:33:22.895 21:34:17 -- common/autotest_common.sh@10 -- # set +x 00:33:22.895 21:34:17 -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:33:22.895 21:34:17 -- common/autotest_common.sh@710 -- # xtrace_disable 00:33:22.895 21:34:17 -- common/autotest_common.sh@10 -- # set +x 00:33:22.895 21:34:17 -- target/identify_passthru.sh@31 -- # nvmfpid=1684798 00:33:22.895 21:34:17 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:22.895 21:34:17 -- target/identify_passthru.sh@35 -- # waitforlisten 1684798 00:33:22.895 21:34:17 -- common/autotest_common.sh@817 -- # '[' -z 1684798 ']' 00:33:22.895 21:34:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:22.895 21:34:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:33:22.895 21:34:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:22.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:22.895 21:34:17 -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:33:22.895 21:34:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:33:22.895 21:34:17 -- common/autotest_common.sh@10 -- # set +x 00:33:23.155 [2024-04-23 21:34:17.192339] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 00:33:23.155 [2024-04-23 21:34:17.192475] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:23.155 EAL: No free 2048 kB hugepages reported on node 1 00:33:23.155 [2024-04-23 21:34:17.335517] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:23.415 [2024-04-23 21:34:17.454971] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:23.415 [2024-04-23 21:34:17.455018] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:33:23.415 [2024-04-23 21:34:17.455030] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:23.415 [2024-04-23 21:34:17.455040] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:23.415 [2024-04-23 21:34:17.455048] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:23.415 [2024-04-23 21:34:17.455139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:23.415 [2024-04-23 21:34:17.455166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:33:23.415 [2024-04-23 21:34:17.455269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:23.415 [2024-04-23 21:34:17.455281] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:33:23.674 21:34:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:33:23.674 21:34:17 -- common/autotest_common.sh@850 -- # return 0 00:33:23.674 21:34:17 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:33:23.674 21:34:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:23.674 21:34:17 -- common/autotest_common.sh@10 -- # set +x 00:33:23.674 INFO: Log level set to 20 00:33:23.674 INFO: Requests: 00:33:23.674 { 00:33:23.674 "jsonrpc": "2.0", 00:33:23.674 "method": "nvmf_set_config", 00:33:23.674 "id": 1, 00:33:23.674 "params": { 00:33:23.674 "admin_cmd_passthru": { 00:33:23.674 "identify_ctrlr": true 00:33:23.674 } 00:33:23.674 } 00:33:23.674 } 00:33:23.674 00:33:23.674 INFO: response: 00:33:23.674 { 00:33:23.674 "jsonrpc": "2.0", 00:33:23.674 "id": 1, 00:33:23.674 "result": true 00:33:23.674 } 00:33:23.674 00:33:23.674 21:34:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:23.674 21:34:17 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:33:23.674 21:34:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:23.674 21:34:17 -- common/autotest_common.sh@10 -- # set +x 00:33:23.674 INFO: Setting log level to 20 00:33:23.674 INFO: Setting log level to 20 00:33:23.674 INFO: Log level set to 20 00:33:23.674 INFO: Log level set to 20 00:33:23.674 INFO: Requests: 00:33:23.674 { 00:33:23.674 "jsonrpc": "2.0", 00:33:23.674 "method": "framework_start_init", 00:33:23.674 "id": 1 00:33:23.674 } 00:33:23.674 00:33:23.674 INFO: Requests: 00:33:23.674 { 00:33:23.674 "jsonrpc": "2.0", 00:33:23.674 "method": "framework_start_init", 00:33:23.675 "id": 1 00:33:23.675 } 00:33:23.675 00:33:23.933 [2024-04-23 21:34:18.095337] nvmf_tgt.c: 453:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:33:23.933 INFO: response: 00:33:23.933 { 00:33:23.933 "jsonrpc": "2.0", 00:33:23.933 "id": 1, 00:33:23.933 "result": true 00:33:23.933 } 00:33:23.933 00:33:23.933 INFO: response: 00:33:23.933 { 00:33:23.933 "jsonrpc": "2.0", 00:33:23.933 "id": 1, 00:33:23.933 "result": true 00:33:23.933 } 00:33:23.933 00:33:23.933 21:34:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:23.933 21:34:18 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:23.933 21:34:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:23.933 21:34:18 -- common/autotest_common.sh@10 -- # set +x 00:33:23.933 INFO: Setting log level to 40 00:33:23.933 INFO: Setting log level to 40 00:33:23.933 INFO: Setting log level to 40 00:33:23.933 [2024-04-23 21:34:18.109438] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:23.933 21:34:18 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:23.933 21:34:18 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:33:23.933 21:34:18 -- common/autotest_common.sh@716 -- # xtrace_disable 00:33:23.933 21:34:18 -- common/autotest_common.sh@10 -- # set +x 00:33:23.933 21:34:18 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:03:00.0 00:33:23.933 21:34:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:23.934 21:34:18 -- common/autotest_common.sh@10 -- # set +x 00:33:24.501 Nvme0n1 00:33:24.501 21:34:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:24.501 21:34:18 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:33:24.501 21:34:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:24.501 21:34:18 -- common/autotest_common.sh@10 -- # set +x 00:33:24.501 21:34:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:24.501 21:34:18 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:33:24.501 21:34:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:24.501 21:34:18 -- common/autotest_common.sh@10 -- # set +x 00:33:24.501 21:34:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:24.501 21:34:18 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:24.501 21:34:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:24.501 21:34:18 -- common/autotest_common.sh@10 -- # set +x 00:33:24.501 [2024-04-23 21:34:18.548335] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:24.501 21:34:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:24.501 21:34:18 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:33:24.501 21:34:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:24.501 21:34:18 -- common/autotest_common.sh@10 -- # set +x 00:33:24.501 [2024-04-23 21:34:18.556064] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:33:24.501 [ 00:33:24.501 { 00:33:24.501 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:33:24.501 "subtype": "Discovery", 00:33:24.501 "listen_addresses": [], 00:33:24.501 "allow_any_host": true, 00:33:24.501 "hosts": [] 00:33:24.501 }, 00:33:24.501 { 00:33:24.501 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:33:24.501 "subtype": "NVMe", 00:33:24.501 "listen_addresses": [ 00:33:24.501 { 00:33:24.501 "transport": "TCP", 00:33:24.501 "trtype": "TCP", 00:33:24.501 "adrfam": "IPv4", 00:33:24.501 "traddr": "10.0.0.2", 00:33:24.501 "trsvcid": "4420" 00:33:24.501 } 00:33:24.501 ], 00:33:24.501 "allow_any_host": true, 00:33:24.501 "hosts": [], 00:33:24.501 "serial_number": "SPDK00000000000001", 00:33:24.501 "model_number": "SPDK bdev Controller", 00:33:24.501 "max_namespaces": 1, 00:33:24.501 "min_cntlid": 1, 00:33:24.501 "max_cntlid": 65519, 00:33:24.501 "namespaces": [ 00:33:24.501 { 00:33:24.501 "nsid": 1, 00:33:24.501 "bdev_name": "Nvme0n1", 00:33:24.501 "name": "Nvme0n1", 00:33:24.501 "nguid": "000000000000000100A0752342AA2262", 00:33:24.501 "uuid": "00000000-0000-0001-00a0-752342aa2262" 00:33:24.501 } 00:33:24.501 ] 00:33:24.501 } 00:33:24.501 ] 00:33:24.501 21:34:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:24.501 21:34:18 -- 
target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:33:24.501 21:34:18 -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:24.501 21:34:18 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:33:24.501 EAL: No free 2048 kB hugepages reported on node 1 00:33:24.501 21:34:18 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=233442AA2262 00:33:24.501 21:34:18 -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:24.501 21:34:18 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:33:24.501 21:34:18 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:33:24.759 EAL: No free 2048 kB hugepages reported on node 1 00:33:24.759 21:34:18 -- target/identify_passthru.sh@61 -- # nvmf_model_number=Micron_7450_MTFDKBA960TFR 00:33:24.759 21:34:18 -- target/identify_passthru.sh@63 -- # '[' 233442AA2262 '!=' 233442AA2262 ']' 00:33:24.759 21:34:18 -- target/identify_passthru.sh@68 -- # '[' Micron_7450_MTFDKBA960TFR '!=' Micron_7450_MTFDKBA960TFR ']' 00:33:24.759 21:34:18 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:24.759 21:34:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:24.759 21:34:18 -- common/autotest_common.sh@10 -- # set +x 00:33:24.759 21:34:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:24.759 21:34:18 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:33:24.759 21:34:18 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:33:24.759 21:34:18 -- nvmf/common.sh@477 -- # nvmfcleanup 00:33:24.759 21:34:18 -- nvmf/common.sh@117 -- # sync 00:33:24.759 21:34:18 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:24.759 21:34:18 -- nvmf/common.sh@120 -- # set +e 00:33:24.759 21:34:18 -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:24.759 21:34:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:24.759 rmmod nvme_tcp 00:33:24.759 rmmod nvme_fabrics 00:33:24.759 rmmod nvme_keyring 00:33:24.759 21:34:19 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:24.759 21:34:19 -- nvmf/common.sh@124 -- # set -e 00:33:24.759 21:34:19 -- nvmf/common.sh@125 -- # return 0 00:33:24.759 21:34:19 -- nvmf/common.sh@478 -- # '[' -n 1684798 ']' 00:33:24.759 21:34:19 -- nvmf/common.sh@479 -- # killprocess 1684798 00:33:24.759 21:34:19 -- common/autotest_common.sh@936 -- # '[' -z 1684798 ']' 00:33:24.759 21:34:19 -- common/autotest_common.sh@940 -- # kill -0 1684798 00:33:24.759 21:34:19 -- common/autotest_common.sh@941 -- # uname 00:33:24.759 21:34:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:33:25.020 21:34:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1684798 00:33:25.020 21:34:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:33:25.020 21:34:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:33:25.020 21:34:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1684798' 00:33:25.020 killing process with pid 1684798 00:33:25.020 21:34:19 -- common/autotest_common.sh@955 -- # kill 1684798 00:33:25.020 [2024-04-23 21:34:19.068140] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' 
scheduled for removal in v24.05 hit 1 times 00:33:25.020 21:34:19 -- common/autotest_common.sh@960 -- # wait 1684798 00:33:26.399 21:34:20 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:33:26.399 21:34:20 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:33:26.399 21:34:20 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:33:26.399 21:34:20 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:26.399 21:34:20 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:26.399 21:34:20 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:26.399 21:34:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:26.399 21:34:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:28.382 21:34:22 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:28.382 00:33:28.382 real 0m13.165s 00:33:28.382 user 0m13.760s 00:33:28.382 sys 0m4.896s 00:33:28.382 21:34:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:28.382 21:34:22 -- common/autotest_common.sh@10 -- # set +x 00:33:28.382 ************************************ 00:33:28.382 END TEST nvmf_identify_passthru 00:33:28.382 ************************************ 00:33:28.382 21:34:22 -- spdk/autotest.sh@290 -- # run_test nvmf_dif /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/dif.sh 00:33:28.382 21:34:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:33:28.382 21:34:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:28.382 21:34:22 -- common/autotest_common.sh@10 -- # set +x 00:33:28.382 ************************************ 00:33:28.382 START TEST nvmf_dif 00:33:28.382 ************************************ 00:33:28.382 21:34:22 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/dif.sh 00:33:28.382 * Looking for test storage... 
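Recapping the identify_passthru test that just ended: starting nvmf_tgt with --wait-for-rpc held the framework back so nvmf_set_config could enable admin_cmd_passthru.identify_ctrlr before framework_start_init, and the pass criterion was plain string equality between identify data read over local PCIe and over NVMe/TCP passthrough. A condensed sketch of those two moves (socket path and relative paths are assumptions; the BDF 0000:03:00.0 and 10.0.0.2:4420/cnode1 target come from the trace):

  RPC='./scripts/rpc.py -s /var/tmp/spdk.sock'
  $RPC nvmf_set_config --passthru-identify-ctrlr    # only accepted before framework init
  $RPC framework_start_init
  # ...transport, bdev_nvme_attach_controller, subsystem, ns, listener as traced above...
  ID=./build/bin/spdk_nvme_identify
  local_sn=$($ID -r 'trtype:PCIe traddr:0000:03:00.0' | awk '/Serial Number:/ {print $3}')
  tcp_sn=$($ID -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' | awk '/Serial Number:/ {print $3}')
  [ "$local_sn" = "$tcp_sn" ] || { echo 'identify passthru mismatch'; exit 1; }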
00:33:28.382 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:33:28.382 21:34:22 -- target/dif.sh@13 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:33:28.383 21:34:22 -- nvmf/common.sh@7 -- # uname -s 00:33:28.383 21:34:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:28.383 21:34:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:28.383 21:34:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:28.383 21:34:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:28.383 21:34:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:28.383 21:34:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:28.383 21:34:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:28.383 21:34:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:28.383 21:34:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:28.383 21:34:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:28.383 21:34:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:33:28.383 21:34:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:33:28.383 21:34:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:28.383 21:34:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:28.383 21:34:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:33:28.383 21:34:22 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:28.383 21:34:22 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:33:28.383 21:34:22 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:28.383 21:34:22 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:28.383 21:34:22 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:28.383 21:34:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:28.383 21:34:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:28.383 21:34:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:28.383 21:34:22 -- paths/export.sh@5 -- # export PATH 00:33:28.383 21:34:22 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:28.383 21:34:22 -- nvmf/common.sh@47 -- # : 0 00:33:28.383 21:34:22 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:28.383 21:34:22 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:28.383 21:34:22 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:28.383 21:34:22 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:28.383 21:34:22 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:28.383 21:34:22 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:28.383 21:34:22 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:28.383 21:34:22 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:28.383 21:34:22 -- target/dif.sh@15 -- # NULL_META=16 00:33:28.383 21:34:22 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:33:28.383 21:34:22 -- target/dif.sh@15 -- # NULL_SIZE=64 00:33:28.383 21:34:22 -- target/dif.sh@15 -- # NULL_DIF=1 00:33:28.383 21:34:22 -- target/dif.sh@135 -- # nvmftestinit 00:33:28.383 21:34:22 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:33:28.383 21:34:22 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:28.383 21:34:22 -- nvmf/common.sh@437 -- # prepare_net_devs 00:33:28.383 21:34:22 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:33:28.383 21:34:22 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:33:28.383 21:34:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:28.383 21:34:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:28.383 21:34:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:28.383 21:34:22 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:33:28.383 21:34:22 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:33:28.383 21:34:22 -- nvmf/common.sh@285 -- # xtrace_disable 00:33:28.383 21:34:22 -- common/autotest_common.sh@10 -- # set +x 00:33:33.660 21:34:27 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:33:33.660 21:34:27 -- nvmf/common.sh@291 -- # pci_devs=() 00:33:33.660 21:34:27 -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:33.660 21:34:27 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:33.660 21:34:27 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:33.660 21:34:27 -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:33.660 21:34:27 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:33.660 21:34:27 -- nvmf/common.sh@295 -- # net_devs=() 00:33:33.660 21:34:27 -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:33.660 21:34:27 -- nvmf/common.sh@296 -- # e810=() 00:33:33.660 21:34:27 -- nvmf/common.sh@296 -- # local -ga e810 00:33:33.660 21:34:27 -- nvmf/common.sh@297 -- # x722=() 00:33:33.660 21:34:27 -- nvmf/common.sh@297 -- # local -ga x722 00:33:33.660 21:34:27 -- nvmf/common.sh@298 -- # mlx=() 00:33:33.660 21:34:27 -- nvmf/common.sh@298 -- # local -ga mlx 00:33:33.660 21:34:27 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:33.660 21:34:27 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:33.660 21:34:27 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:33.660 21:34:27 -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:33.660 21:34:27 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:33.660 21:34:27 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:33.660 21:34:27 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:33.660 21:34:27 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:33.660 21:34:27 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:33.660 21:34:27 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:33.660 21:34:27 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:33.660 21:34:27 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:33.660 21:34:27 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:33.660 21:34:27 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:33:33.660 21:34:27 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:33:33.660 21:34:27 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:33:33.660 21:34:27 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:33.660 21:34:27 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:33.660 21:34:27 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:33:33.660 Found 0000:27:00.0 (0x8086 - 0x159b) 00:33:33.660 21:34:27 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:33.660 21:34:27 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:33.660 21:34:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:33.660 21:34:27 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:33.660 21:34:27 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:33.660 21:34:27 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:33.660 21:34:27 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:33:33.660 Found 0000:27:00.1 (0x8086 - 0x159b) 00:33:33.660 21:34:27 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:33.660 21:34:27 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:33.660 21:34:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:33.660 21:34:27 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:33.660 21:34:27 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:33.660 21:34:27 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:33.660 21:34:27 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:33:33.660 21:34:27 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:33.660 21:34:27 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:33.660 21:34:27 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:33:33.660 21:34:27 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:33.661 21:34:27 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:33:33.661 Found net devices under 0000:27:00.0: cvl_0_0 00:33:33.661 21:34:27 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:33:33.661 21:34:27 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:33.661 21:34:27 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:33.661 21:34:27 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:33:33.661 21:34:27 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:33.661 21:34:27 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:33:33.661 Found net devices under 0000:27:00.1: cvl_0_1 00:33:33.661 21:34:27 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:33:33.661 21:34:27 -- 
nvmf/common.sh@393 -- # (( 2 == 0 )) 00:33:33.661 21:34:27 -- nvmf/common.sh@403 -- # is_hw=yes 00:33:33.661 21:34:27 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:33:33.661 21:34:27 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:33:33.661 21:34:27 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:33:33.661 21:34:27 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:33.661 21:34:27 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:33.661 21:34:27 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:33.661 21:34:27 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:33.661 21:34:27 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:33.661 21:34:27 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:33.661 21:34:27 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:33.661 21:34:27 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:33.661 21:34:27 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:33.661 21:34:27 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:33.661 21:34:27 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:33.661 21:34:27 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:33.661 21:34:27 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:33.661 21:34:27 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:33.661 21:34:27 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:33.661 21:34:27 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:33.661 21:34:27 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:33.919 21:34:27 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:33.919 21:34:27 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:33.919 21:34:27 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:33.919 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:33.919 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.645 ms 00:33:33.919 00:33:33.919 --- 10.0.0.2 ping statistics --- 00:33:33.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:33.920 rtt min/avg/max/mdev = 0.645/0.645/0.645/0.000 ms 00:33:33.920 21:34:27 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:33.920 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:33.920 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.493 ms 00:33:33.920 00:33:33.920 --- 10.0.0.1 ping statistics --- 00:33:33.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:33.920 rtt min/avg/max/mdev = 0.493/0.493/0.493/0.000 ms 00:33:33.920 21:34:27 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:33.920 21:34:27 -- nvmf/common.sh@411 -- # return 0 00:33:33.920 21:34:27 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:33:33.920 21:34:27 -- nvmf/common.sh@440 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:33:36.456 0000:74:02.0 (8086 0cfe): Already using the vfio-pci driver 00:33:36.456 0000:c9:00.0 (144d a80a): Already using the vfio-pci driver 00:33:36.456 0000:f1:02.0 (8086 0cfe): Already using the vfio-pci driver 00:33:36.456 0000:79:02.0 (8086 0cfe): Already using the vfio-pci driver 00:33:36.456 0000:6f:01.0 (8086 0b25): Already using the vfio-pci driver 00:33:36.456 0000:6f:02.0 (8086 0cfe): Already using the vfio-pci driver 00:33:36.456 0000:f6:01.0 (8086 0b25): Already using the vfio-pci driver 00:33:36.456 0000:f6:02.0 (8086 0cfe): Already using the vfio-pci driver 00:33:36.456 0000:74:01.0 (8086 0b25): Already using the vfio-pci driver 00:33:36.456 0000:6a:02.0 (8086 0cfe): Already using the vfio-pci driver 00:33:36.456 0000:79:01.0 (8086 0b25): Already using the vfio-pci driver 00:33:36.456 0000:ec:01.0 (8086 0b25): Already using the vfio-pci driver 00:33:36.456 0000:6a:01.0 (8086 0b25): Already using the vfio-pci driver 00:33:36.456 0000:ec:02.0 (8086 0cfe): Already using the vfio-pci driver 00:33:36.456 0000:e7:01.0 (8086 0b25): Already using the vfio-pci driver 00:33:36.456 0000:e7:02.0 (8086 0cfe): Already using the vfio-pci driver 00:33:36.456 0000:f1:01.0 (8086 0b25): Already using the vfio-pci driver 00:33:36.456 0000:03:00.0 (1344 51c3): Already using the vfio-pci driver 00:33:36.456 21:34:30 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:36.456 21:34:30 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:33:36.456 21:34:30 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:33:36.456 21:34:30 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:36.456 21:34:30 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:33:36.456 21:34:30 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:33:36.456 21:34:30 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:33:36.456 21:34:30 -- target/dif.sh@137 -- # nvmfappstart 00:33:36.456 21:34:30 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:33:36.456 21:34:30 -- common/autotest_common.sh@710 -- # xtrace_disable 00:33:36.456 21:34:30 -- common/autotest_common.sh@10 -- # set +x 00:33:36.456 21:34:30 -- nvmf/common.sh@470 -- # nvmfpid=1691465 00:33:36.456 21:34:30 -- nvmf/common.sh@471 -- # waitforlisten 1691465 00:33:36.456 21:34:30 -- common/autotest_common.sh@817 -- # '[' -z 1691465 ']' 00:33:36.456 21:34:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:36.456 21:34:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:33:36.456 21:34:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:36.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:33:36.456 21:34:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:33:36.456 21:34:30 -- common/autotest_common.sh@10 -- # set +x 00:33:36.456 21:34:30 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:33:36.715 [2024-04-23 21:34:30.791775] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 00:33:36.715 [2024-04-23 21:34:30.791874] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:36.715 EAL: No free 2048 kB hugepages reported on node 1 00:33:36.715 [2024-04-23 21:34:30.907592] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:36.975 [2024-04-23 21:34:30.998043] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:36.975 [2024-04-23 21:34:30.998078] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:36.975 [2024-04-23 21:34:30.998087] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:36.975 [2024-04-23 21:34:30.998097] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:36.975 [2024-04-23 21:34:30.998103] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:36.975 [2024-04-23 21:34:30.998126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:37.546 21:34:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:33:37.546 21:34:31 -- common/autotest_common.sh@850 -- # return 0 00:33:37.546 21:34:31 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:33:37.546 21:34:31 -- common/autotest_common.sh@716 -- # xtrace_disable 00:33:37.546 21:34:31 -- common/autotest_common.sh@10 -- # set +x 00:33:37.546 21:34:31 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:37.546 21:34:31 -- target/dif.sh@139 -- # create_transport 00:33:37.546 21:34:31 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:33:37.546 21:34:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:37.546 21:34:31 -- common/autotest_common.sh@10 -- # set +x 00:33:37.546 [2024-04-23 21:34:31.553490] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:37.546 21:34:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:37.546 21:34:31 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:33:37.546 21:34:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:33:37.546 21:34:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:37.546 21:34:31 -- common/autotest_common.sh@10 -- # set +x 00:33:37.546 ************************************ 00:33:37.546 START TEST fio_dif_1_default 00:33:37.546 ************************************ 00:33:37.546 21:34:31 -- common/autotest_common.sh@1111 -- # fio_dif_1 00:33:37.546 21:34:31 -- target/dif.sh@86 -- # create_subsystems 0 00:33:37.546 21:34:31 -- target/dif.sh@28 -- # local sub 00:33:37.546 21:34:31 -- target/dif.sh@30 -- # for sub in "$@" 00:33:37.546 21:34:31 -- target/dif.sh@31 -- # create_subsystem 0 00:33:37.546 21:34:31 -- target/dif.sh@18 -- # local sub_id=0 00:33:37.546 21:34:31 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 
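The dif.sh helpers traced here drive the target over its RPC socket; rpc_cmd is the harness wrapper around scripts/rpc.py. Issued by hand against the /var/tmp/spdk.sock socket the target just opened, the subsystem-0 setup that follows would look roughly like this (a sketch; the transport itself was created a few lines up with nvmf_create_transport -t tcp -o --dif-insert-or-strip):

    # 64 MB null bdev, 512-byte blocks, 16-byte metadata, DIF type 1
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420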
00:33:37.546 21:34:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:37.546 21:34:31 -- common/autotest_common.sh@10 -- # set +x 00:33:37.546 bdev_null0 00:33:37.546 21:34:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:37.546 21:34:31 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:37.546 21:34:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:37.546 21:34:31 -- common/autotest_common.sh@10 -- # set +x 00:33:37.546 21:34:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:37.546 21:34:31 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:37.546 21:34:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:37.546 21:34:31 -- common/autotest_common.sh@10 -- # set +x 00:33:37.546 21:34:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:37.546 21:34:31 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:37.546 21:34:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:37.546 21:34:31 -- common/autotest_common.sh@10 -- # set +x 00:33:37.546 [2024-04-23 21:34:31.665700] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:37.546 21:34:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:37.546 21:34:31 -- target/dif.sh@87 -- # fio /dev/fd/62 00:33:37.546 21:34:31 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:37.546 21:34:31 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:37.546 21:34:31 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:33:37.546 21:34:31 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:33:37.546 21:34:31 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:37.546 21:34:31 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:37.546 21:34:31 -- common/autotest_common.sh@1325 -- # local sanitizers 00:33:37.546 21:34:31 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:33:37.546 21:34:31 -- common/autotest_common.sh@1327 -- # shift 00:33:37.546 21:34:31 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:33:37.546 21:34:31 -- nvmf/common.sh@521 -- # config=() 00:33:37.546 21:34:31 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:33:37.546 21:34:31 -- nvmf/common.sh@521 -- # local subsystem config 00:33:37.546 21:34:31 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:33:37.546 21:34:31 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:33:37.546 { 00:33:37.546 "params": { 00:33:37.546 "name": "Nvme$subsystem", 00:33:37.546 "trtype": "$TEST_TRANSPORT", 00:33:37.546 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:37.546 "adrfam": "ipv4", 00:33:37.546 "trsvcid": "$NVMF_PORT", 00:33:37.546 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:37.546 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:37.546 "hdgst": ${hdgst:-false}, 00:33:37.546 "ddgst": ${ddgst:-false} 00:33:37.546 }, 00:33:37.546 "method": "bdev_nvme_attach_controller" 00:33:37.546 } 00:33:37.546 EOF 00:33:37.546 )") 00:33:37.546 21:34:31 -- target/dif.sh@82 -- # gen_fio_conf 00:33:37.546 21:34:31 -- target/dif.sh@54 -- # local file 00:33:37.546 
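Note on the fio_bdev invocation above: neither the bdev JSON nor the fio job file ever touches disk; fio receives them as /dev/fd/62 and /dev/fd/61, which is what bash process substitution produces, and the heredoc template being traced fills in the per-subsystem connection parameters. Stripped of the harness, the pattern is approximately as follows (a sketch assuming gen_nvmf_target_json and gen_fio_conf write to stdout, as their traces suggest; the LD_PRELOAD value is resolved a few lines below via the ldd | grep libasan dance so the sanitizer runtime the plugin links against loads first):

    plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev
    LD_PRELOAD="/usr/lib64/libasan.so.8 $plugin" \
        /usr/src/fio/fio --ioengine=spdk_bdev \
        --spdk_json_conf=<(gen_nvmf_target_json 0) \
        <(gen_fio_conf)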
21:34:31 -- target/dif.sh@56 -- # cat 00:33:37.546 21:34:31 -- nvmf/common.sh@543 -- # cat 00:33:37.546 21:34:31 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:33:37.546 21:34:31 -- common/autotest_common.sh@1331 -- # grep libasan 00:33:37.546 21:34:31 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:33:37.546 21:34:31 -- target/dif.sh@72 -- # (( file = 1 )) 00:33:37.546 21:34:31 -- target/dif.sh@72 -- # (( file <= files )) 00:33:37.546 21:34:31 -- nvmf/common.sh@545 -- # jq . 00:33:37.546 21:34:31 -- nvmf/common.sh@546 -- # IFS=, 00:33:37.546 21:34:31 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:33:37.546 "params": { 00:33:37.546 "name": "Nvme0", 00:33:37.546 "trtype": "tcp", 00:33:37.546 "traddr": "10.0.0.2", 00:33:37.546 "adrfam": "ipv4", 00:33:37.546 "trsvcid": "4420", 00:33:37.546 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:37.546 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:37.546 "hdgst": false, 00:33:37.546 "ddgst": false 00:33:37.546 }, 00:33:37.546 "method": "bdev_nvme_attach_controller" 00:33:37.546 }' 00:33:37.547 21:34:31 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:33:37.547 21:34:31 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:33:37.547 21:34:31 -- common/autotest_common.sh@1333 -- # break 00:33:37.547 21:34:31 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:37.547 21:34:31 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:38.114 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:38.114 fio-3.35 00:33:38.114 Starting 1 thread 00:33:38.114 EAL: No free 2048 kB hugepages reported on node 1 00:33:50.317 00:33:50.317 filename0: (groupid=0, jobs=1): err= 0: pid=1691998: Tue Apr 23 21:34:42 2024 00:33:50.317 read: IOPS=188, BW=752KiB/s (770kB/s)(7552KiB/10039msec) 00:33:50.317 slat (nsec): min=5968, max=35749, avg=7271.75, stdev=2235.23 00:33:50.317 clat (usec): min=548, max=42229, avg=21247.01, stdev=20214.42 00:33:50.317 lat (usec): min=554, max=42265, avg=21254.28, stdev=20214.09 00:33:50.317 clat percentiles (usec): 00:33:50.317 | 1.00th=[ 660], 5.00th=[ 725], 10.00th=[ 758], 20.00th=[ 775], 00:33:50.317 | 30.00th=[ 783], 40.00th=[ 824], 50.00th=[41157], 60.00th=[41157], 00:33:50.317 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:33:50.317 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:33:50.317 | 99.99th=[42206] 00:33:50.317 bw ( KiB/s): min= 672, max= 768, per=100.00%, avg=753.60, stdev=28.39, samples=20 00:33:50.317 iops : min= 168, max= 192, avg=188.40, stdev= 7.10, samples=20 00:33:50.317 lat (usec) : 750=6.89%, 1000=42.48% 00:33:50.317 lat (msec) : 50=50.64% 00:33:50.317 cpu : usr=95.82%, sys=3.87%, ctx=18, majf=0, minf=1634 00:33:50.317 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:50.317 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:50.317 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:50.317 issued rwts: total=1888,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:50.317 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:50.317 00:33:50.317 Run status group 0 (all jobs): 00:33:50.317 READ: bw=752KiB/s (770kB/s), 752KiB/s-752KiB/s 
(770kB/s-770kB/s), io=7552KiB (7733kB), run=10039-10039msec 00:33:50.317 ----------------------------------------------------- 00:33:50.317 Suppressions used: 00:33:50.317 count bytes template 00:33:50.317 1 8 /usr/src/fio/parse.c 00:33:50.317 1 8 libtcmalloc_minimal.so 00:33:50.317 1 904 libcrypto.so 00:33:50.317 ----------------------------------------------------- 00:33:50.317 00:33:50.317 21:34:43 -- target/dif.sh@88 -- # destroy_subsystems 0 00:33:50.317 21:34:43 -- target/dif.sh@43 -- # local sub 00:33:50.317 21:34:43 -- target/dif.sh@45 -- # for sub in "$@" 00:33:50.317 21:34:43 -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:50.317 21:34:43 -- target/dif.sh@36 -- # local sub_id=0 00:33:50.317 21:34:43 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:50.317 21:34:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:50.317 21:34:43 -- common/autotest_common.sh@10 -- # set +x 00:33:50.317 21:34:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:50.317 21:34:43 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:50.317 21:34:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:50.317 21:34:43 -- common/autotest_common.sh@10 -- # set +x 00:33:50.317 21:34:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:50.317 00:33:50.317 real 0m11.772s 00:33:50.317 user 0m22.793s 00:33:50.317 sys 0m0.830s 00:33:50.317 21:34:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:50.317 21:34:43 -- common/autotest_common.sh@10 -- # set +x 00:33:50.317 ************************************ 00:33:50.317 END TEST fio_dif_1_default 00:33:50.317 ************************************ 00:33:50.317 21:34:43 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:33:50.317 21:34:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:33:50.317 21:34:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:50.317 21:34:43 -- common/autotest_common.sh@10 -- # set +x 00:33:50.317 ************************************ 00:33:50.317 START TEST fio_dif_1_multi_subsystems 00:33:50.317 ************************************ 00:33:50.317 21:34:43 -- common/autotest_common.sh@1111 -- # fio_dif_1_multi_subsystems 00:33:50.317 21:34:43 -- target/dif.sh@92 -- # local files=1 00:33:50.317 21:34:43 -- target/dif.sh@94 -- # create_subsystems 0 1 00:33:50.317 21:34:43 -- target/dif.sh@28 -- # local sub 00:33:50.317 21:34:43 -- target/dif.sh@30 -- # for sub in "$@" 00:33:50.317 21:34:43 -- target/dif.sh@31 -- # create_subsystem 0 00:33:50.317 21:34:43 -- target/dif.sh@18 -- # local sub_id=0 00:33:50.317 21:34:43 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:50.317 21:34:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:50.317 21:34:43 -- common/autotest_common.sh@10 -- # set +x 00:33:50.317 bdev_null0 00:33:50.317 21:34:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:50.317 21:34:43 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:50.317 21:34:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:50.317 21:34:43 -- common/autotest_common.sh@10 -- # set +x 00:33:50.317 21:34:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:50.317 21:34:43 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:50.317 21:34:43 -- common/autotest_common.sh@549 -- # xtrace_disable 
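The fio_dif_1_multi_subsystems variant starting here repeats the single-target flow twice over: two type-1 DIF null bdevs (bdev_null0, bdev_null1) behind cnode0 and cnode1 on the same 10.0.0.2:4420 listener, driven by one fio process with one filename section per controller. A hedged sketch of the job file gen_fio_conf produces for it, with the global options inferred from the fio banner and the ~10 s runs further below rather than shown verbatim in this log:

    cat <<'EOF' > /tmp/dif_multi.fio    # illustrative path only
    [global]
    ioengine=spdk_bdev
    rw=randread
    bs=4k
    iodepth=4
    time_based=1
    runtime=10
    [filename0]
    filename=Nvme0n1
    [filename1]
    filename=Nvme1n1
    EOF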
00:33:50.317 21:34:43 -- common/autotest_common.sh@10 -- # set +x 00:33:50.317 21:34:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:50.317 21:34:43 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:50.317 21:34:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:50.317 21:34:43 -- common/autotest_common.sh@10 -- # set +x 00:33:50.317 [2024-04-23 21:34:43.548447] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:50.317 21:34:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:50.317 21:34:43 -- target/dif.sh@30 -- # for sub in "$@" 00:33:50.317 21:34:43 -- target/dif.sh@31 -- # create_subsystem 1 00:33:50.317 21:34:43 -- target/dif.sh@18 -- # local sub_id=1 00:33:50.317 21:34:43 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:33:50.317 21:34:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:50.317 21:34:43 -- common/autotest_common.sh@10 -- # set +x 00:33:50.317 bdev_null1 00:33:50.317 21:34:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:50.317 21:34:43 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:50.317 21:34:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:50.317 21:34:43 -- common/autotest_common.sh@10 -- # set +x 00:33:50.318 21:34:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:50.318 21:34:43 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:50.318 21:34:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:50.318 21:34:43 -- common/autotest_common.sh@10 -- # set +x 00:33:50.318 21:34:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:50.318 21:34:43 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:50.318 21:34:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:50.318 21:34:43 -- common/autotest_common.sh@10 -- # set +x 00:33:50.318 21:34:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:50.318 21:34:43 -- target/dif.sh@95 -- # fio /dev/fd/62 00:33:50.318 21:34:43 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:50.318 21:34:43 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:50.318 21:34:43 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:33:50.318 21:34:43 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:33:50.318 21:34:43 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:50.318 21:34:43 -- common/autotest_common.sh@1325 -- # local sanitizers 00:33:50.318 21:34:43 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:33:50.318 21:34:43 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:33:50.318 21:34:43 -- common/autotest_common.sh@1327 -- # shift 00:33:50.318 21:34:43 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:33:50.318 21:34:43 -- nvmf/common.sh@521 -- # config=() 00:33:50.318 21:34:43 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:33:50.318 21:34:43 -- nvmf/common.sh@521 -- # local subsystem config 00:33:50.318 21:34:43 -- 
nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:33:50.318 21:34:43 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:33:50.318 { 00:33:50.318 "params": { 00:33:50.318 "name": "Nvme$subsystem", 00:33:50.318 "trtype": "$TEST_TRANSPORT", 00:33:50.318 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:50.318 "adrfam": "ipv4", 00:33:50.318 "trsvcid": "$NVMF_PORT", 00:33:50.318 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:50.318 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:50.318 "hdgst": ${hdgst:-false}, 00:33:50.318 "ddgst": ${ddgst:-false} 00:33:50.318 }, 00:33:50.318 "method": "bdev_nvme_attach_controller" 00:33:50.318 } 00:33:50.318 EOF 00:33:50.318 )") 00:33:50.318 21:34:43 -- target/dif.sh@82 -- # gen_fio_conf 00:33:50.318 21:34:43 -- target/dif.sh@54 -- # local file 00:33:50.318 21:34:43 -- target/dif.sh@56 -- # cat 00:33:50.318 21:34:43 -- nvmf/common.sh@543 -- # cat 00:33:50.318 21:34:43 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:33:50.318 21:34:43 -- common/autotest_common.sh@1331 -- # grep libasan 00:33:50.318 21:34:43 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:33:50.318 21:34:43 -- target/dif.sh@72 -- # (( file = 1 )) 00:33:50.318 21:34:43 -- target/dif.sh@72 -- # (( file <= files )) 00:33:50.318 21:34:43 -- target/dif.sh@73 -- # cat 00:33:50.318 21:34:43 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:33:50.318 21:34:43 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:33:50.318 { 00:33:50.318 "params": { 00:33:50.318 "name": "Nvme$subsystem", 00:33:50.318 "trtype": "$TEST_TRANSPORT", 00:33:50.318 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:50.318 "adrfam": "ipv4", 00:33:50.318 "trsvcid": "$NVMF_PORT", 00:33:50.318 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:50.318 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:50.318 "hdgst": ${hdgst:-false}, 00:33:50.318 "ddgst": ${ddgst:-false} 00:33:50.318 }, 00:33:50.318 "method": "bdev_nvme_attach_controller" 00:33:50.318 } 00:33:50.318 EOF 00:33:50.318 )") 00:33:50.318 21:34:43 -- nvmf/common.sh@543 -- # cat 00:33:50.318 21:34:43 -- target/dif.sh@72 -- # (( file++ )) 00:33:50.318 21:34:43 -- target/dif.sh@72 -- # (( file <= files )) 00:33:50.318 21:34:43 -- nvmf/common.sh@545 -- # jq . 
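For reference, the fragments jq is about to validate sit inside the standard SPDK JSON-config envelope that --spdk_json_conf expects; written out by hand, the file fio reads from /dev/fd/62 is approximately the following (the subsystems/bdev wrapper is an assumption from SPDK's usual config format, not visible in this trace; the second fragment repeats the first with Nvme1/cnode1/host1):

    cat <<'EOF'
    { "subsystems": [ { "subsystem": "bdev", "config": [
        { "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                      "adrfam": "ipv4", "trsvcid": "4420",
                      "subnqn": "nqn.2016-06.io.spdk:cnode0",
                      "hostnqn": "nqn.2016-06.io.spdk:host0",
                      "hdgst": false, "ddgst": false },
          "method": "bdev_nvme_attach_controller" }
    ] } ] }
    EOF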
00:33:50.318 21:34:43 -- nvmf/common.sh@546 -- # IFS=, 00:33:50.318 21:34:43 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:33:50.318 "params": { 00:33:50.318 "name": "Nvme0", 00:33:50.318 "trtype": "tcp", 00:33:50.318 "traddr": "10.0.0.2", 00:33:50.318 "adrfam": "ipv4", 00:33:50.318 "trsvcid": "4420", 00:33:50.318 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:50.318 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:50.318 "hdgst": false, 00:33:50.318 "ddgst": false 00:33:50.318 }, 00:33:50.318 "method": "bdev_nvme_attach_controller" 00:33:50.318 },{ 00:33:50.318 "params": { 00:33:50.318 "name": "Nvme1", 00:33:50.318 "trtype": "tcp", 00:33:50.318 "traddr": "10.0.0.2", 00:33:50.318 "adrfam": "ipv4", 00:33:50.318 "trsvcid": "4420", 00:33:50.318 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:50.318 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:50.318 "hdgst": false, 00:33:50.318 "ddgst": false 00:33:50.318 }, 00:33:50.318 "method": "bdev_nvme_attach_controller" 00:33:50.318 }' 00:33:50.318 21:34:43 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:33:50.318 21:34:43 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:33:50.318 21:34:43 -- common/autotest_common.sh@1333 -- # break 00:33:50.318 21:34:43 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:50.318 21:34:43 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:50.318 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:50.318 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:50.318 fio-3.35 00:33:50.318 Starting 2 threads 00:33:50.318 EAL: No free 2048 kB hugepages reported on node 1 00:34:02.523 00:34:02.523 filename0: (groupid=0, jobs=1): err= 0: pid=1694958: Tue Apr 23 21:34:54 2024 00:34:02.523 read: IOPS=185, BW=742KiB/s (760kB/s)(7424KiB/10009msec) 00:34:02.523 slat (nsec): min=3475, max=19461, avg=6709.44, stdev=1079.41 00:34:02.523 clat (usec): min=708, max=44198, avg=21551.29, stdev=20203.25 00:34:02.523 lat (usec): min=715, max=44218, avg=21558.00, stdev=20202.96 00:34:02.523 clat percentiles (usec): 00:34:02.523 | 1.00th=[ 799], 5.00th=[ 1172], 10.00th=[ 1254], 20.00th=[ 1303], 00:34:02.523 | 30.00th=[ 1303], 40.00th=[ 1319], 50.00th=[41157], 60.00th=[41681], 00:34:02.523 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:34:02.523 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44303], 99.95th=[44303], 00:34:02.523 | 99.99th=[44303] 00:34:02.523 bw ( KiB/s): min= 672, max= 768, per=49.85%, avg=740.80, stdev=33.28, samples=20 00:34:02.523 iops : min= 168, max= 192, avg=185.20, stdev= 8.32, samples=20 00:34:02.523 lat (usec) : 750=0.38%, 1000=2.48% 00:34:02.523 lat (msec) : 2=46.93%, 50=50.22% 00:34:02.523 cpu : usr=98.01%, sys=1.69%, ctx=19, majf=0, minf=1634 00:34:02.523 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:02.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.523 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.523 issued rwts: total=1856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:02.523 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:02.523 filename1: (groupid=0, jobs=1): err= 0: pid=1694959: Tue Apr 23 21:34:54 2024 00:34:02.523 read: 
IOPS=185, BW=743KiB/s (761kB/s)(7440KiB/10014msec) 00:34:02.523 slat (nsec): min=5932, max=25426, avg=7014.93, stdev=1701.38 00:34:02.523 clat (usec): min=741, max=42916, avg=21516.05, stdev=20190.72 00:34:02.523 lat (usec): min=747, max=42942, avg=21523.07, stdev=20190.39 00:34:02.523 clat percentiles (usec): 00:34:02.523 | 1.00th=[ 799], 5.00th=[ 1237], 10.00th=[ 1287], 20.00th=[ 1303], 00:34:02.523 | 30.00th=[ 1303], 40.00th=[ 1336], 50.00th=[41157], 60.00th=[41681], 00:34:02.523 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:34:02.523 | 99.00th=[42206], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:34:02.523 | 99.99th=[42730] 00:34:02.523 bw ( KiB/s): min= 704, max= 768, per=49.99%, avg=742.40, stdev=32.17, samples=20 00:34:02.523 iops : min= 176, max= 192, avg=185.60, stdev= 8.04, samples=20 00:34:02.523 lat (usec) : 750=0.22%, 1000=1.77% 00:34:02.523 lat (msec) : 2=47.90%, 50=50.11% 00:34:02.523 cpu : usr=98.07%, sys=1.64%, ctx=13, majf=0, minf=1636 00:34:02.523 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:02.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.523 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.523 issued rwts: total=1860,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:02.523 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:02.523 00:34:02.523 Run status group 0 (all jobs): 00:34:02.523 READ: bw=1484KiB/s (1520kB/s), 742KiB/s-743KiB/s (760kB/s-761kB/s), io=14.5MiB (15.2MB), run=10009-10014msec 00:34:02.523 ----------------------------------------------------- 00:34:02.523 Suppressions used: 00:34:02.523 count bytes template 00:34:02.523 2 16 /usr/src/fio/parse.c 00:34:02.523 1 8 libtcmalloc_minimal.so 00:34:02.523 1 904 libcrypto.so 00:34:02.523 ----------------------------------------------------- 00:34:02.524 00:34:02.524 21:34:55 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:34:02.524 21:34:55 -- target/dif.sh@43 -- # local sub 00:34:02.524 21:34:55 -- target/dif.sh@45 -- # for sub in "$@" 00:34:02.524 21:34:55 -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:02.524 21:34:55 -- target/dif.sh@36 -- # local sub_id=0 00:34:02.524 21:34:55 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:02.524 21:34:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:02.524 21:34:55 -- common/autotest_common.sh@10 -- # set +x 00:34:02.524 21:34:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:02.524 21:34:55 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:02.524 21:34:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:02.524 21:34:55 -- common/autotest_common.sh@10 -- # set +x 00:34:02.524 21:34:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:02.524 21:34:55 -- target/dif.sh@45 -- # for sub in "$@" 00:34:02.524 21:34:55 -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:02.524 21:34:55 -- target/dif.sh@36 -- # local sub_id=1 00:34:02.524 21:34:55 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:02.524 21:34:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:02.524 21:34:55 -- common/autotest_common.sh@10 -- # set +x 00:34:02.524 21:34:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:02.524 21:34:55 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:02.524 21:34:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:02.524 21:34:55 -- 
common/autotest_common.sh@10 -- # set +x 00:34:02.524 21:34:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:02.524 00:34:02.524 real 0m12.223s 00:34:02.524 user 0m39.640s 00:34:02.524 sys 0m0.781s 00:34:02.524 21:34:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:34:02.524 21:34:55 -- common/autotest_common.sh@10 -- # set +x 00:34:02.524 ************************************ 00:34:02.524 END TEST fio_dif_1_multi_subsystems 00:34:02.524 ************************************ 00:34:02.524 21:34:55 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:34:02.524 21:34:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:34:02.524 21:34:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:34:02.524 21:34:55 -- common/autotest_common.sh@10 -- # set +x 00:34:02.524 ************************************ 00:34:02.524 START TEST fio_dif_rand_params 00:34:02.524 ************************************ 00:34:02.524 21:34:55 -- common/autotest_common.sh@1111 -- # fio_dif_rand_params 00:34:02.524 21:34:55 -- target/dif.sh@100 -- # local NULL_DIF 00:34:02.524 21:34:55 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:34:02.524 21:34:55 -- target/dif.sh@103 -- # NULL_DIF=3 00:34:02.524 21:34:55 -- target/dif.sh@103 -- # bs=128k 00:34:02.524 21:34:55 -- target/dif.sh@103 -- # numjobs=3 00:34:02.524 21:34:55 -- target/dif.sh@103 -- # iodepth=3 00:34:02.524 21:34:55 -- target/dif.sh@103 -- # runtime=5 00:34:02.524 21:34:55 -- target/dif.sh@105 -- # create_subsystems 0 00:34:02.524 21:34:55 -- target/dif.sh@28 -- # local sub 00:34:02.524 21:34:55 -- target/dif.sh@30 -- # for sub in "$@" 00:34:02.524 21:34:55 -- target/dif.sh@31 -- # create_subsystem 0 00:34:02.524 21:34:55 -- target/dif.sh@18 -- # local sub_id=0 00:34:02.524 21:34:55 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:02.524 21:34:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:02.524 21:34:55 -- common/autotest_common.sh@10 -- # set +x 00:34:02.524 bdev_null0 00:34:02.524 21:34:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:02.524 21:34:55 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:02.524 21:34:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:02.524 21:34:55 -- common/autotest_common.sh@10 -- # set +x 00:34:02.524 21:34:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:02.524 21:34:55 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:02.524 21:34:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:02.524 21:34:55 -- common/autotest_common.sh@10 -- # set +x 00:34:02.524 21:34:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:02.524 21:34:55 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:02.524 21:34:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:02.524 21:34:55 -- common/autotest_common.sh@10 -- # set +x 00:34:02.524 [2024-04-23 21:34:55.874710] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:02.524 21:34:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:02.524 21:34:55 -- target/dif.sh@106 -- # fio /dev/fd/62 00:34:02.524 21:34:55 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:02.524 21:34:55 -- 
common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:02.524 21:34:55 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:34:02.524 21:34:55 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:02.524 21:34:55 -- common/autotest_common.sh@1325 -- # local sanitizers 00:34:02.524 21:34:55 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:34:02.524 21:34:55 -- common/autotest_common.sh@1327 -- # shift 00:34:02.524 21:34:55 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:34:02.524 21:34:55 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:34:02.524 21:34:55 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:34:02.524 21:34:55 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:02.524 21:34:55 -- target/dif.sh@82 -- # gen_fio_conf 00:34:02.524 21:34:55 -- nvmf/common.sh@521 -- # config=() 00:34:02.524 21:34:55 -- target/dif.sh@54 -- # local file 00:34:02.524 21:34:55 -- nvmf/common.sh@521 -- # local subsystem config 00:34:02.524 21:34:55 -- target/dif.sh@56 -- # cat 00:34:02.524 21:34:55 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:34:02.524 21:34:55 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:34:02.524 { 00:34:02.524 "params": { 00:34:02.524 "name": "Nvme$subsystem", 00:34:02.524 "trtype": "$TEST_TRANSPORT", 00:34:02.524 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:02.524 "adrfam": "ipv4", 00:34:02.524 "trsvcid": "$NVMF_PORT", 00:34:02.524 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:02.524 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:02.524 "hdgst": ${hdgst:-false}, 00:34:02.524 "ddgst": ${ddgst:-false} 00:34:02.524 }, 00:34:02.524 "method": "bdev_nvme_attach_controller" 00:34:02.524 } 00:34:02.524 EOF 00:34:02.524 )") 00:34:02.524 21:34:55 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:34:02.524 21:34:55 -- common/autotest_common.sh@1331 -- # grep libasan 00:34:02.524 21:34:55 -- nvmf/common.sh@543 -- # cat 00:34:02.524 21:34:55 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:34:02.524 21:34:55 -- target/dif.sh@72 -- # (( file = 1 )) 00:34:02.524 21:34:55 -- target/dif.sh@72 -- # (( file <= files )) 00:34:02.524 21:34:55 -- nvmf/common.sh@545 -- # jq . 
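This fio_dif_rand_params pass runs with NULL_DIF=3 (a type-3 DIF null bdev), bs=128k, numjobs=3, iodepth=3 and runtime=5, as set at target/dif.sh@103 above. A hedged sketch of the job file those knobs turn into (section name matches the fio banner below):

    cat <<'EOF' > /tmp/dif_rand.fio    # illustrative path only
    [global]
    ioengine=spdk_bdev
    rw=randread
    bs=128k
    iodepth=3
    numjobs=3
    time_based=1
    runtime=5
    [filename0]
    filename=Nvme0n1
    EOF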
00:34:02.524 21:34:55 -- nvmf/common.sh@546 -- # IFS=, 00:34:02.524 21:34:55 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:34:02.524 "params": { 00:34:02.524 "name": "Nvme0", 00:34:02.524 "trtype": "tcp", 00:34:02.524 "traddr": "10.0.0.2", 00:34:02.524 "adrfam": "ipv4", 00:34:02.524 "trsvcid": "4420", 00:34:02.524 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:02.524 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:02.524 "hdgst": false, 00:34:02.524 "ddgst": false 00:34:02.524 }, 00:34:02.524 "method": "bdev_nvme_attach_controller" 00:34:02.524 }' 00:34:02.524 21:34:55 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:34:02.524 21:34:55 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:34:02.524 21:34:55 -- common/autotest_common.sh@1333 -- # break 00:34:02.524 21:34:55 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:02.524 21:34:55 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:02.524 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:02.524 ... 00:34:02.524 fio-3.35 00:34:02.524 Starting 3 threads 00:34:02.524 EAL: No free 2048 kB hugepages reported on node 1 00:34:07.798 00:34:07.798 filename0: (groupid=0, jobs=1): err= 0: pid=1697677: Tue Apr 23 21:35:01 2024 00:34:07.798 read: IOPS=288, BW=36.1MiB/s (37.8MB/s)(180MiB/5003msec) 00:34:07.798 slat (nsec): min=5595, max=27540, avg=8556.25, stdev=2464.19 00:34:07.798 clat (usec): min=3586, max=89376, avg=10388.23, stdev=11736.44 00:34:07.798 lat (usec): min=3592, max=89383, avg=10396.78, stdev=11736.37 00:34:07.798 clat percentiles (usec): 00:34:07.798 | 1.00th=[ 4752], 5.00th=[ 5145], 10.00th=[ 5342], 20.00th=[ 5800], 00:34:07.798 | 30.00th=[ 6325], 40.00th=[ 6652], 50.00th=[ 6915], 60.00th=[ 7308], 00:34:07.798 | 70.00th=[ 7832], 80.00th=[ 8586], 90.00th=[ 9896], 95.00th=[48497], 00:34:07.798 | 99.00th=[50594], 99.50th=[51119], 99.90th=[59507], 99.95th=[89654], 00:34:07.798 | 99.99th=[89654] 00:34:07.798 bw ( KiB/s): min=25088, max=48384, per=39.52%, avg=36889.60, stdev=6757.46, samples=10 00:34:07.798 iops : min= 196, max= 378, avg=288.20, stdev=52.79, samples=10 00:34:07.798 lat (msec) : 4=0.07%, 10=90.23%, 20=1.66%, 50=6.17%, 100=1.87% 00:34:07.798 cpu : usr=96.30%, sys=3.36%, ctx=8, majf=0, minf=1634 00:34:07.798 IO depths : 1=1.7%, 2=98.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:07.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:07.798 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:07.798 issued rwts: total=1443,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:07.798 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:07.798 filename0: (groupid=0, jobs=1): err= 0: pid=1697678: Tue Apr 23 21:35:01 2024 00:34:07.798 read: IOPS=136, BW=17.1MiB/s (17.9MB/s)(86.0MiB/5038msec) 00:34:07.798 slat (nsec): min=5991, max=25471, avg=7999.02, stdev=2233.04 00:34:07.798 clat (usec): min=6422, max=94051, avg=21952.29, stdev=20204.48 00:34:07.798 lat (usec): min=6429, max=94058, avg=21960.29, stdev=20204.68 00:34:07.798 clat percentiles (usec): 00:34:07.798 | 1.00th=[ 6849], 5.00th=[ 7439], 10.00th=[ 7832], 20.00th=[ 8455], 00:34:07.798 | 30.00th=[ 8848], 40.00th=[ 9241], 50.00th=[ 9896], 60.00th=[10552], 00:34:07.798 | 70.00th=[12780], 80.00th=[50070], 90.00th=[51643], 
95.00th=[52691], 00:34:07.798 | 99.00th=[91751], 99.50th=[92799], 99.90th=[93848], 99.95th=[93848], 00:34:07.798 | 99.99th=[93848] 00:34:07.798 bw ( KiB/s): min=10752, max=31744, per=18.79%, avg=17538.90, stdev=6289.19, samples=10 00:34:07.798 iops : min= 84, max= 248, avg=137.00, stdev=49.15, samples=10 00:34:07.798 lat (msec) : 10=51.74%, 20=18.75%, 50=7.70%, 100=21.80% 00:34:07.798 cpu : usr=97.64%, sys=2.10%, ctx=6, majf=0, minf=1637 00:34:07.798 IO depths : 1=3.2%, 2=96.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:07.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:07.798 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:07.798 issued rwts: total=688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:07.798 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:07.798 filename0: (groupid=0, jobs=1): err= 0: pid=1697679: Tue Apr 23 21:35:01 2024 00:34:07.798 read: IOPS=306, BW=38.3MiB/s (40.2MB/s)(193MiB/5033msec) 00:34:07.798 slat (nsec): min=5956, max=43004, avg=7896.53, stdev=2476.68 00:34:07.798 clat (usec): min=4218, max=92185, avg=9773.80, stdev=11059.26 00:34:07.798 lat (usec): min=4224, max=92194, avg=9781.69, stdev=11059.39 00:34:07.798 clat percentiles (usec): 00:34:07.798 | 1.00th=[ 4490], 5.00th=[ 4883], 10.00th=[ 5014], 20.00th=[ 5342], 00:34:07.798 | 30.00th=[ 5800], 40.00th=[ 6456], 50.00th=[ 6783], 60.00th=[ 7111], 00:34:07.798 | 70.00th=[ 7635], 80.00th=[ 8455], 90.00th=[ 9503], 95.00th=[47973], 00:34:07.798 | 99.00th=[49546], 99.50th=[50594], 99.90th=[51643], 99.95th=[91751], 00:34:07.798 | 99.99th=[91751] 00:34:07.798 bw ( KiB/s): min=24320, max=47872, per=42.23%, avg=39424.00, stdev=8280.41, samples=10 00:34:07.798 iops : min= 190, max= 374, avg=308.00, stdev=64.69, samples=10 00:34:07.798 lat (msec) : 10=91.38%, 20=1.49%, 50=6.55%, 100=0.58% 00:34:07.798 cpu : usr=96.18%, sys=3.44%, ctx=7, majf=0, minf=1636 00:34:07.798 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:07.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:07.798 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:07.798 issued rwts: total=1543,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:07.798 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:07.798 00:34:07.798 Run status group 0 (all jobs): 00:34:07.798 READ: bw=91.2MiB/s (95.6MB/s), 17.1MiB/s-38.3MiB/s (17.9MB/s-40.2MB/s), io=459MiB (482MB), run=5003-5038msec 00:34:08.364 ----------------------------------------------------- 00:34:08.364 Suppressions used: 00:34:08.364 count bytes template 00:34:08.364 5 44 /usr/src/fio/parse.c 00:34:08.364 1 8 libtcmalloc_minimal.so 00:34:08.364 1 904 libcrypto.so 00:34:08.364 ----------------------------------------------------- 00:34:08.364 00:34:08.364 21:35:02 -- target/dif.sh@107 -- # destroy_subsystems 0 00:34:08.364 21:35:02 -- target/dif.sh@43 -- # local sub 00:34:08.364 21:35:02 -- target/dif.sh@45 -- # for sub in "$@" 00:34:08.364 21:35:02 -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:08.364 21:35:02 -- target/dif.sh@36 -- # local sub_id=0 00:34:08.364 21:35:02 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:08.364 21:35:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:08.364 21:35:02 -- common/autotest_common.sh@10 -- # set +x 00:34:08.364 21:35:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:08.364 21:35:02 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete 
bdev_null0 00:34:08.364 21:35:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:08.364 21:35:02 -- common/autotest_common.sh@10 -- # set +x 00:34:08.364 21:35:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:08.364 21:35:02 -- target/dif.sh@109 -- # NULL_DIF=2 00:34:08.364 21:35:02 -- target/dif.sh@109 -- # bs=4k 00:34:08.364 21:35:02 -- target/dif.sh@109 -- # numjobs=8 00:34:08.364 21:35:02 -- target/dif.sh@109 -- # iodepth=16 00:34:08.364 21:35:02 -- target/dif.sh@109 -- # runtime= 00:34:08.364 21:35:02 -- target/dif.sh@109 -- # files=2 00:34:08.364 21:35:02 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:34:08.364 21:35:02 -- target/dif.sh@28 -- # local sub 00:34:08.364 21:35:02 -- target/dif.sh@30 -- # for sub in "$@" 00:34:08.364 21:35:02 -- target/dif.sh@31 -- # create_subsystem 0 00:34:08.364 21:35:02 -- target/dif.sh@18 -- # local sub_id=0 00:34:08.364 21:35:02 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:34:08.364 21:35:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:08.364 21:35:02 -- common/autotest_common.sh@10 -- # set +x 00:34:08.364 bdev_null0 00:34:08.364 21:35:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:08.364 21:35:02 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:08.364 21:35:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:08.364 21:35:02 -- common/autotest_common.sh@10 -- # set +x 00:34:08.364 21:35:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:08.364 21:35:02 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:08.364 21:35:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:08.364 21:35:02 -- common/autotest_common.sh@10 -- # set +x 00:34:08.364 21:35:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:08.364 21:35:02 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:08.364 21:35:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:08.364 21:35:02 -- common/autotest_common.sh@10 -- # set +x 00:34:08.365 [2024-04-23 21:35:02.600843] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:08.365 21:35:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:08.365 21:35:02 -- target/dif.sh@30 -- # for sub in "$@" 00:34:08.365 21:35:02 -- target/dif.sh@31 -- # create_subsystem 1 00:34:08.365 21:35:02 -- target/dif.sh@18 -- # local sub_id=1 00:34:08.365 21:35:02 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:34:08.365 21:35:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:08.365 21:35:02 -- common/autotest_common.sh@10 -- # set +x 00:34:08.365 bdev_null1 00:34:08.365 21:35:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:08.365 21:35:02 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:08.365 21:35:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:08.365 21:35:02 -- common/autotest_common.sh@10 -- # set +x 00:34:08.365 21:35:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:08.365 21:35:02 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:08.365 21:35:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:08.365 21:35:02 -- 
common/autotest_common.sh@10 -- # set +x 00:34:08.365 21:35:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:08.365 21:35:02 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:08.365 21:35:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:08.365 21:35:02 -- common/autotest_common.sh@10 -- # set +x 00:34:08.365 21:35:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:08.365 21:35:02 -- target/dif.sh@30 -- # for sub in "$@" 00:34:08.365 21:35:02 -- target/dif.sh@31 -- # create_subsystem 2 00:34:08.365 21:35:02 -- target/dif.sh@18 -- # local sub_id=2 00:34:08.365 21:35:02 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:34:08.365 21:35:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:08.365 21:35:02 -- common/autotest_common.sh@10 -- # set +x 00:34:08.624 bdev_null2 00:34:08.624 21:35:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:08.624 21:35:02 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:34:08.624 21:35:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:08.624 21:35:02 -- common/autotest_common.sh@10 -- # set +x 00:34:08.624 21:35:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:08.624 21:35:02 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:34:08.624 21:35:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:08.624 21:35:02 -- common/autotest_common.sh@10 -- # set +x 00:34:08.624 21:35:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:08.624 21:35:02 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:34:08.624 21:35:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:08.624 21:35:02 -- common/autotest_common.sh@10 -- # set +x 00:34:08.624 21:35:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:08.624 21:35:02 -- target/dif.sh@112 -- # fio /dev/fd/62 00:34:08.624 21:35:02 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:08.624 21:35:02 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:08.624 21:35:02 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:34:08.624 21:35:02 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:34:08.624 21:35:02 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:08.624 21:35:02 -- common/autotest_common.sh@1325 -- # local sanitizers 00:34:08.624 21:35:02 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:34:08.624 21:35:02 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:34:08.624 21:35:02 -- common/autotest_common.sh@1327 -- # shift 00:34:08.624 21:35:02 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:34:08.624 21:35:02 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:34:08.624 21:35:02 -- nvmf/common.sh@521 -- # config=() 00:34:08.624 21:35:02 -- nvmf/common.sh@521 -- # local subsystem config 00:34:08.624 21:35:02 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:34:08.624 21:35:02 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:34:08.624 { 00:34:08.624 "params": { 
00:34:08.624 "name": "Nvme$subsystem", 00:34:08.624 "trtype": "$TEST_TRANSPORT", 00:34:08.624 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:08.624 "adrfam": "ipv4", 00:34:08.624 "trsvcid": "$NVMF_PORT", 00:34:08.624 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:08.624 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:08.624 "hdgst": ${hdgst:-false}, 00:34:08.624 "ddgst": ${ddgst:-false} 00:34:08.624 }, 00:34:08.624 "method": "bdev_nvme_attach_controller" 00:34:08.624 } 00:34:08.624 EOF 00:34:08.624 )") 00:34:08.624 21:35:02 -- target/dif.sh@82 -- # gen_fio_conf 00:34:08.624 21:35:02 -- target/dif.sh@54 -- # local file 00:34:08.624 21:35:02 -- target/dif.sh@56 -- # cat 00:34:08.624 21:35:02 -- nvmf/common.sh@543 -- # cat 00:34:08.624 21:35:02 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:34:08.624 21:35:02 -- common/autotest_common.sh@1331 -- # grep libasan 00:34:08.624 21:35:02 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:34:08.624 21:35:02 -- target/dif.sh@72 -- # (( file = 1 )) 00:34:08.624 21:35:02 -- target/dif.sh@72 -- # (( file <= files )) 00:34:08.624 21:35:02 -- target/dif.sh@73 -- # cat 00:34:08.624 21:35:02 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:34:08.624 21:35:02 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:34:08.624 { 00:34:08.624 "params": { 00:34:08.624 "name": "Nvme$subsystem", 00:34:08.624 "trtype": "$TEST_TRANSPORT", 00:34:08.624 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:08.624 "adrfam": "ipv4", 00:34:08.624 "trsvcid": "$NVMF_PORT", 00:34:08.624 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:08.624 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:08.624 "hdgst": ${hdgst:-false}, 00:34:08.624 "ddgst": ${ddgst:-false} 00:34:08.624 }, 00:34:08.624 "method": "bdev_nvme_attach_controller" 00:34:08.624 } 00:34:08.624 EOF 00:34:08.624 )") 00:34:08.624 21:35:02 -- nvmf/common.sh@543 -- # cat 00:34:08.624 21:35:02 -- target/dif.sh@72 -- # (( file++ )) 00:34:08.624 21:35:02 -- target/dif.sh@72 -- # (( file <= files )) 00:34:08.624 21:35:02 -- target/dif.sh@73 -- # cat 00:34:08.624 21:35:02 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:34:08.624 21:35:02 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:34:08.624 { 00:34:08.624 "params": { 00:34:08.624 "name": "Nvme$subsystem", 00:34:08.624 "trtype": "$TEST_TRANSPORT", 00:34:08.624 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:08.624 "adrfam": "ipv4", 00:34:08.624 "trsvcid": "$NVMF_PORT", 00:34:08.624 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:08.624 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:08.624 "hdgst": ${hdgst:-false}, 00:34:08.625 "ddgst": ${ddgst:-false} 00:34:08.625 }, 00:34:08.625 "method": "bdev_nvme_attach_controller" 00:34:08.625 } 00:34:08.625 EOF 00:34:08.625 )") 00:34:08.625 21:35:02 -- target/dif.sh@72 -- # (( file++ )) 00:34:08.625 21:35:02 -- target/dif.sh@72 -- # (( file <= files )) 00:34:08.625 21:35:02 -- nvmf/common.sh@543 -- # cat 00:34:08.625 21:35:02 -- nvmf/common.sh@545 -- # jq . 
00:34:08.625 21:35:02 -- nvmf/common.sh@546 -- # IFS=, 00:34:08.625 21:35:02 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:34:08.625 "params": { 00:34:08.625 "name": "Nvme0", 00:34:08.625 "trtype": "tcp", 00:34:08.625 "traddr": "10.0.0.2", 00:34:08.625 "adrfam": "ipv4", 00:34:08.625 "trsvcid": "4420", 00:34:08.625 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:08.625 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:08.625 "hdgst": false, 00:34:08.625 "ddgst": false 00:34:08.625 }, 00:34:08.625 "method": "bdev_nvme_attach_controller" 00:34:08.625 },{ 00:34:08.625 "params": { 00:34:08.625 "name": "Nvme1", 00:34:08.625 "trtype": "tcp", 00:34:08.625 "traddr": "10.0.0.2", 00:34:08.625 "adrfam": "ipv4", 00:34:08.625 "trsvcid": "4420", 00:34:08.625 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:08.625 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:08.625 "hdgst": false, 00:34:08.625 "ddgst": false 00:34:08.625 }, 00:34:08.625 "method": "bdev_nvme_attach_controller" 00:34:08.625 },{ 00:34:08.625 "params": { 00:34:08.625 "name": "Nvme2", 00:34:08.625 "trtype": "tcp", 00:34:08.625 "traddr": "10.0.0.2", 00:34:08.625 "adrfam": "ipv4", 00:34:08.625 "trsvcid": "4420", 00:34:08.625 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:34:08.625 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:34:08.625 "hdgst": false, 00:34:08.625 "ddgst": false 00:34:08.625 }, 00:34:08.625 "method": "bdev_nvme_attach_controller" 00:34:08.625 }' 00:34:08.625 21:35:02 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:34:08.625 21:35:02 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:34:08.625 21:35:02 -- common/autotest_common.sh@1333 -- # break 00:34:08.625 21:35:02 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:08.625 21:35:02 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:08.883 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:08.883 ... 00:34:08.883 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:08.883 ... 00:34:08.883 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:08.883 ... 
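[annotation] The ldd | grep libasan | awk probe and the LD_PRELOAD assignment traced just above are the sanitizer hand-off: fio itself is not ASAN-instrumented, so the runtime that the SPDK ioengine was linked against has to be preloaded, together with the plugin, before fio starts. A minimal sketch of that logic using the paths shown in the trace — it covers only the libasan branch of the traced sanitizer loop, and the two /dev/fd arguments are supplied by the calling wrapper, not by this snippet:

# Locate the ASAN runtime the fio plugin links against and preload it
# (plus the plugin itself) so the uninstrumented fio binary can load
# the instrumented spdk_bdev ioengine.
plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev
fio_dir=/usr/src/fio

asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
if [[ -n $asan_lib ]]; then
    LD_PRELOAD="$asan_lib $plugin" "$fio_dir/fio" \
        --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
fi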
00:34:08.883 fio-3.35 00:34:08.883 Starting 24 threads 00:34:08.883 EAL: No free 2048 kB hugepages reported on node 1 00:34:21.080 00:34:21.080 filename0: (groupid=0, jobs=1): err= 0: pid=1699416: Tue Apr 23 21:35:14 2024 00:34:21.080 read: IOPS=475, BW=1900KiB/s (1946kB/s)(18.6MiB/10002msec) 00:34:21.080 slat (usec): min=5, max=494, avg=35.33, stdev=17.18 00:34:21.080 clat (usec): min=22887, max=67566, avg=33387.41, stdev=2140.78 00:34:21.080 lat (usec): min=22896, max=67595, avg=33422.75, stdev=2139.10 00:34:21.080 clat percentiles (usec): 00:34:21.080 | 1.00th=[31065], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:34:21.080 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:34:21.080 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:34:21.080 | 99.00th=[41681], 99.50th=[47973], 99.90th=[67634], 99.95th=[67634], 00:34:21.080 | 99.99th=[67634] 00:34:21.080 bw ( KiB/s): min= 1664, max= 2048, per=4.05%, avg=1899.79, stdev=77.07, samples=19 00:34:21.080 iops : min= 416, max= 512, avg=474.95, stdev=19.27, samples=19 00:34:21.080 lat (msec) : 50=99.66%, 100=0.34% 00:34:21.080 cpu : usr=91.53%, sys=4.23%, ctx=105, majf=0, minf=1635 00:34:21.080 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.3%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:21.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.080 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.080 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:21.080 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:21.080 filename0: (groupid=0, jobs=1): err= 0: pid=1699417: Tue Apr 23 21:35:14 2024 00:34:21.080 read: IOPS=539, BW=2158KiB/s (2210kB/s)(21.1MiB/10021msec) 00:34:21.080 slat (nsec): min=4404, max=55314, avg=10195.54, stdev=4922.78 00:34:21.080 clat (usec): min=948, max=53021, avg=29580.52, stdev=6231.00 00:34:21.080 lat (usec): min=955, max=53030, avg=29590.72, stdev=6231.48 00:34:21.080 clat percentiles (usec): 00:34:21.080 | 1.00th=[ 8455], 5.00th=[19792], 10.00th=[20841], 20.00th=[22414], 00:34:21.080 | 30.00th=[26084], 40.00th=[32375], 50.00th=[32900], 60.00th=[33162], 00:34:21.080 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33424], 95.00th=[33817], 00:34:21.080 | 99.00th=[47449], 99.50th=[48497], 99.90th=[51119], 99.95th=[51119], 00:34:21.080 | 99.99th=[53216] 00:34:21.080 bw ( KiB/s): min= 1792, max= 3104, per=4.60%, avg=2156.00, stdev=415.35, samples=20 00:34:21.080 iops : min= 448, max= 776, avg=539.00, stdev=103.84, samples=20 00:34:21.080 lat (usec) : 1000=0.07% 00:34:21.080 lat (msec) : 4=0.30%, 10=0.65%, 20=5.38%, 50=93.30%, 100=0.30% 00:34:21.080 cpu : usr=98.64%, sys=0.90%, ctx=36, majf=0, minf=1636 00:34:21.080 IO depths : 1=3.5%, 2=7.7%, 4=18.9%, 8=60.8%, 16=9.1%, 32=0.0%, >=64=0.0% 00:34:21.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.080 complete : 0=0.0%, 4=92.5%, 8=1.8%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.080 issued rwts: total=5406,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:21.080 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:21.080 filename0: (groupid=0, jobs=1): err= 0: pid=1699418: Tue Apr 23 21:35:14 2024 00:34:21.080 read: IOPS=478, BW=1915KiB/s (1961kB/s)(18.8MiB/10027msec) 00:34:21.080 slat (nsec): min=5904, max=70278, avg=15529.18, stdev=9717.21 00:34:21.080 clat (usec): min=18228, max=50279, avg=33294.82, stdev=1893.75 00:34:21.080 lat (usec): min=18236, max=50299, avg=33310.35, stdev=1893.92 00:34:21.080 clat 
percentiles (usec): 00:34:21.080 | 1.00th=[26084], 5.00th=[31851], 10.00th=[32375], 20.00th=[32900], 00:34:21.080 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:34:21.080 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:34:21.080 | 99.00th=[40633], 99.50th=[43254], 99.90th=[50070], 99.95th=[50070], 00:34:21.080 | 99.99th=[50070] 00:34:21.080 bw ( KiB/s): min= 1792, max= 2048, per=4.08%, avg=1913.60, stdev=50.44, samples=20 00:34:21.080 iops : min= 448, max= 512, avg=478.40, stdev=12.61, samples=20 00:34:21.080 lat (msec) : 20=0.33%, 50=99.33%, 100=0.33% 00:34:21.080 cpu : usr=98.56%, sys=0.91%, ctx=75, majf=0, minf=1636 00:34:21.080 IO depths : 1=6.0%, 2=12.1%, 4=24.8%, 8=50.7%, 16=6.5%, 32=0.0%, >=64=0.0% 00:34:21.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.080 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.080 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:21.080 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:21.080 filename0: (groupid=0, jobs=1): err= 0: pid=1699419: Tue Apr 23 21:35:14 2024 00:34:21.080 read: IOPS=479, BW=1916KiB/s (1962kB/s)(18.8MiB/10019msec) 00:34:21.080 slat (nsec): min=5646, max=63258, avg=21071.70, stdev=10462.69 00:34:21.081 clat (usec): min=18068, max=41695, avg=33196.31, stdev=1230.78 00:34:21.081 lat (usec): min=18077, max=41722, avg=33217.38, stdev=1231.50 00:34:21.081 clat percentiles (usec): 00:34:21.081 | 1.00th=[30540], 5.00th=[32113], 10.00th=[32375], 20.00th=[32900], 00:34:21.081 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33162], 00:34:21.081 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:34:21.081 | 99.00th=[35914], 99.50th=[35914], 99.90th=[41681], 99.95th=[41681], 00:34:21.081 | 99.99th=[41681] 00:34:21.081 bw ( KiB/s): min= 1792, max= 2048, per=4.08%, avg=1913.75, stdev=50.46, samples=20 00:34:21.081 iops : min= 448, max= 512, avg=478.40, stdev=12.61, samples=20 00:34:21.081 lat (msec) : 20=0.25%, 50=99.75% 00:34:21.081 cpu : usr=94.98%, sys=2.51%, ctx=78, majf=0, minf=1634 00:34:21.081 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:21.081 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.081 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.081 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:21.081 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:21.081 filename0: (groupid=0, jobs=1): err= 0: pid=1699420: Tue Apr 23 21:35:14 2024 00:34:21.081 read: IOPS=484, BW=1936KiB/s (1983kB/s)(18.9MiB/10020msec) 00:34:21.081 slat (nsec): min=7820, max=62924, avg=18811.57, stdev=10071.61 00:34:21.081 clat (usec): min=17269, max=68709, avg=32901.47, stdev=3273.00 00:34:21.081 lat (usec): min=17277, max=68745, avg=32920.29, stdev=3274.33 00:34:21.081 clat percentiles (usec): 00:34:21.081 | 1.00th=[20579], 5.00th=[26870], 10.00th=[31851], 20.00th=[32637], 00:34:21.081 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:34:21.081 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:34:21.081 | 99.00th=[38536], 99.50th=[44303], 99.90th=[68682], 99.95th=[68682], 00:34:21.081 | 99.99th=[68682] 00:34:21.081 bw ( KiB/s): min= 1664, max= 2144, per=4.12%, avg=1933.05, stdev=99.55, samples=20 00:34:21.081 iops : min= 416, max= 536, avg=483.25, stdev=24.89, samples=20 00:34:21.081 lat (msec) : 
20=0.41%, 50=99.26%, 100=0.33% 00:34:21.081 cpu : usr=98.45%, sys=1.03%, ctx=18, majf=0, minf=1634 00:34:21.081 IO depths : 1=4.5%, 2=9.6%, 4=20.5%, 8=56.6%, 16=8.7%, 32=0.0%, >=64=0.0% 00:34:21.081 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.081 complete : 0=0.0%, 4=93.1%, 8=1.9%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.081 issued rwts: total=4850,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:21.081 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:21.081 filename0: (groupid=0, jobs=1): err= 0: pid=1699421: Tue Apr 23 21:35:14 2024 00:34:21.081 read: IOPS=478, BW=1912KiB/s (1958kB/s)(18.7MiB/10006msec) 00:34:21.081 slat (usec): min=4, max=485, avg=25.70, stdev=15.15 00:34:21.081 clat (usec): min=21879, max=65330, avg=33265.70, stdev=1288.28 00:34:21.081 lat (usec): min=21888, max=65354, avg=33291.40, stdev=1286.93 00:34:21.081 clat percentiles (usec): 00:34:21.081 | 1.00th=[31065], 5.00th=[31851], 10.00th=[32375], 20.00th=[32637], 00:34:21.081 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33424], 00:34:21.081 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:34:21.081 | 99.00th=[35390], 99.50th=[35914], 99.90th=[44303], 99.95th=[44303], 00:34:21.081 | 99.99th=[65274] 00:34:21.081 bw ( KiB/s): min= 1792, max= 2048, per=4.08%, avg=1913.05, stdev=51.77, samples=19 00:34:21.081 iops : min= 448, max= 512, avg=478.26, stdev=12.94, samples=19 00:34:21.081 lat (msec) : 50=99.96%, 100=0.04% 00:34:21.081 cpu : usr=90.02%, sys=4.72%, ctx=287, majf=0, minf=1634 00:34:21.081 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:34:21.081 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.081 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.081 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:21.081 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:21.081 filename0: (groupid=0, jobs=1): err= 0: pid=1699422: Tue Apr 23 21:35:14 2024 00:34:21.081 read: IOPS=470, BW=1881KiB/s (1926kB/s)(18.4MiB/10015msec) 00:34:21.081 slat (usec): min=5, max=495, avg=28.27, stdev=19.02 00:34:21.081 clat (usec): min=16852, max=67770, avg=33782.56, stdev=5656.38 00:34:21.081 lat (usec): min=16898, max=67795, avg=33810.83, stdev=5655.85 00:34:21.081 clat percentiles (usec): 00:34:21.081 | 1.00th=[17695], 5.00th=[24249], 10.00th=[31851], 20.00th=[32637], 00:34:21.081 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33424], 00:34:21.081 | 70.00th=[33817], 80.00th=[34341], 90.00th=[38536], 95.00th=[47449], 00:34:21.081 | 99.00th=[53740], 99.50th=[53740], 99.90th=[67634], 99.95th=[67634], 00:34:21.081 | 99.99th=[67634] 00:34:21.081 bw ( KiB/s): min= 1635, max= 2064, per=4.00%, avg=1878.55, stdev=94.80, samples=20 00:34:21.081 iops : min= 408, max= 516, avg=469.60, stdev=23.80, samples=20 00:34:21.081 lat (msec) : 20=2.48%, 50=95.90%, 100=1.61% 00:34:21.081 cpu : usr=98.60%, sys=0.97%, ctx=13, majf=0, minf=1634 00:34:21.081 IO depths : 1=3.7%, 2=7.9%, 4=20.4%, 8=58.7%, 16=9.3%, 32=0.0%, >=64=0.0% 00:34:21.081 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.081 complete : 0=0.0%, 4=93.4%, 8=1.4%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.081 issued rwts: total=4709,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:21.081 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:21.081 filename0: (groupid=0, jobs=1): err= 0: pid=1699423: Tue Apr 23 21:35:14 2024 00:34:21.081 
read: IOPS=477, BW=1911KiB/s (1957kB/s)(18.7MiB/10014msec) 00:34:21.081 slat (usec): min=4, max=495, avg=35.65, stdev=16.83 00:34:21.081 clat (usec): min=18938, max=67128, avg=33145.33, stdev=2330.24 00:34:21.081 lat (usec): min=18949, max=67148, avg=33180.98, stdev=2329.90 00:34:21.081 clat percentiles (usec): 00:34:21.081 | 1.00th=[30540], 5.00th=[31851], 10.00th=[32375], 20.00th=[32637], 00:34:21.081 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:34:21.081 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:34:21.081 | 99.00th=[35390], 99.50th=[35914], 99.90th=[66847], 99.95th=[67634], 00:34:21.081 | 99.99th=[67634] 00:34:21.081 bw ( KiB/s): min= 1664, max= 2048, per=4.06%, avg=1906.53, stdev=72.59, samples=19 00:34:21.081 iops : min= 416, max= 512, avg=476.63, stdev=18.15, samples=19 00:34:21.081 lat (msec) : 20=0.23%, 50=99.44%, 100=0.33% 00:34:21.081 cpu : usr=98.60%, sys=0.90%, ctx=60, majf=0, minf=1635 00:34:21.081 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:21.081 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.081 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.081 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:21.081 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:21.081 filename1: (groupid=0, jobs=1): err= 0: pid=1699424: Tue Apr 23 21:35:14 2024 00:34:21.081 read: IOPS=477, BW=1911KiB/s (1957kB/s)(18.7MiB/10015msec) 00:34:21.081 slat (usec): min=3, max=498, avg=36.10, stdev=16.87 00:34:21.081 clat (usec): min=20016, max=67607, avg=33146.48, stdev=2312.88 00:34:21.081 lat (usec): min=20030, max=67627, avg=33182.58, stdev=2312.44 00:34:21.081 clat percentiles (usec): 00:34:21.081 | 1.00th=[30540], 5.00th=[31851], 10.00th=[32375], 20.00th=[32637], 00:34:21.081 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:34:21.081 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:34:21.081 | 99.00th=[35390], 99.50th=[35914], 99.90th=[67634], 99.95th=[67634], 00:34:21.081 | 99.99th=[67634] 00:34:21.081 bw ( KiB/s): min= 1667, max= 2048, per=4.06%, avg=1907.35, stdev=70.18, samples=20 00:34:21.081 iops : min= 416, max= 512, avg=476.80, stdev=17.68, samples=20 00:34:21.081 lat (msec) : 50=99.67%, 100=0.33% 00:34:21.081 cpu : usr=98.42%, sys=1.03%, ctx=68, majf=0, minf=1635 00:34:21.081 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:21.081 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.081 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.081 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:21.081 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:21.081 filename1: (groupid=0, jobs=1): err= 0: pid=1699425: Tue Apr 23 21:35:14 2024 00:34:21.081 read: IOPS=487, BW=1951KiB/s (1997kB/s)(19.1MiB/10011msec) 00:34:21.081 slat (nsec): min=5138, max=63321, avg=13915.32, stdev=7212.26 00:34:21.081 clat (usec): min=12904, max=93742, avg=32685.88, stdev=4000.10 00:34:21.081 lat (usec): min=12916, max=93767, avg=32699.80, stdev=4000.54 00:34:21.081 clat percentiles (usec): 00:34:21.081 | 1.00th=[19792], 5.00th=[25822], 10.00th=[31851], 20.00th=[32637], 00:34:21.081 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:34:21.081 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:34:21.081 | 99.00th=[37487], 
99.50th=[46924], 99.90th=[73925], 99.95th=[74974], 00:34:21.081 | 99.99th=[93848] 00:34:21.081 bw ( KiB/s): min= 1715, max= 2352, per=4.14%, avg=1941.21, stdev=118.10, samples=19 00:34:21.081 iops : min= 428, max= 588, avg=485.26, stdev=29.61, samples=19 00:34:21.081 lat (msec) : 20=1.15%, 50=98.36%, 100=0.49% 00:34:21.081 cpu : usr=98.67%, sys=0.86%, ctx=73, majf=0, minf=1635 00:34:21.081 IO depths : 1=5.3%, 2=10.7%, 4=22.0%, 8=54.4%, 16=7.6%, 32=0.0%, >=64=0.0% 00:34:21.081 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.081 complete : 0=0.0%, 4=93.4%, 8=1.2%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.081 issued rwts: total=4882,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:21.081 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:21.081 filename1: (groupid=0, jobs=1): err= 0: pid=1699426: Tue Apr 23 21:35:14 2024 00:34:21.081 read: IOPS=715, BW=2863KiB/s (2932kB/s)(28.0MiB/10019msec) 00:34:21.081 slat (nsec): min=3810, max=33070, avg=9445.42, stdev=1543.67 00:34:21.081 clat (usec): min=6279, max=40464, avg=22290.11, stdev=3001.84 00:34:21.081 lat (usec): min=6287, max=40473, avg=22299.56, stdev=3001.81 00:34:21.081 clat percentiles (usec): 00:34:21.081 | 1.00th=[12518], 5.00th=[19792], 10.00th=[20055], 20.00th=[20579], 00:34:21.081 | 30.00th=[20579], 40.00th=[21103], 50.00th=[21365], 60.00th=[21890], 00:34:21.081 | 70.00th=[24249], 80.00th=[24773], 90.00th=[25560], 95.00th=[26346], 00:34:21.081 | 99.00th=[32637], 99.50th=[33817], 99.90th=[36963], 99.95th=[40633], 00:34:21.081 | 99.99th=[40633] 00:34:21.081 bw ( KiB/s): min= 2736, max= 2960, per=6.10%, avg=2864.80, stdev=53.06, samples=20 00:34:21.082 iops : min= 684, max= 740, avg=716.20, stdev=13.26, samples=20 00:34:21.082 lat (msec) : 10=0.32%, 20=8.27%, 50=91.41% 00:34:21.082 cpu : usr=98.26%, sys=1.17%, ctx=105, majf=0, minf=1632 00:34:21.082 IO depths : 1=0.1%, 2=0.1%, 4=6.2%, 8=81.2%, 16=12.6%, 32=0.0%, >=64=0.0% 00:34:21.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.082 complete : 0=0.0%, 4=88.9%, 8=5.7%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.082 issued rwts: total=7172,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:21.082 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:21.082 filename1: (groupid=0, jobs=1): err= 0: pid=1699427: Tue Apr 23 21:35:14 2024 00:34:21.082 read: IOPS=464, BW=1859KiB/s (1903kB/s)(18.2MiB/10016msec) 00:34:21.082 slat (usec): min=4, max=488, avg=28.43, stdev=19.07 00:34:21.082 clat (usec): min=17118, max=81362, avg=34210.60, stdev=4929.00 00:34:21.082 lat (usec): min=17129, max=81384, avg=34239.03, stdev=4928.20 00:34:21.082 clat percentiles (usec): 00:34:21.082 | 1.00th=[23462], 5.00th=[31851], 10.00th=[32375], 20.00th=[32637], 00:34:21.082 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:34:21.082 | 70.00th=[33817], 80.00th=[34341], 90.00th=[36439], 95.00th=[43254], 00:34:21.082 | 99.00th=[49546], 99.50th=[54789], 99.90th=[81265], 99.95th=[81265], 00:34:21.082 | 99.99th=[81265] 00:34:21.082 bw ( KiB/s): min= 1536, max= 1968, per=3.96%, avg=1856.40, stdev=101.01, samples=20 00:34:21.082 iops : min= 384, max= 492, avg=464.10, stdev=25.25, samples=20 00:34:21.082 lat (msec) : 20=0.60%, 50=98.62%, 100=0.77% 00:34:21.082 cpu : usr=98.35%, sys=1.22%, ctx=22, majf=0, minf=1632 00:34:21.082 IO depths : 1=3.1%, 2=6.4%, 4=18.4%, 8=61.6%, 16=10.5%, 32=0.0%, >=64=0.0% 00:34:21.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.082 complete : 0=0.0%, 
4=93.2%, 8=2.1%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.082 issued rwts: total=4654,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:21.082 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:21.082 filename1: (groupid=0, jobs=1): err= 0: pid=1699428: Tue Apr 23 21:35:14 2024 00:34:21.082 read: IOPS=487, BW=1951KiB/s (1998kB/s)(19.1MiB/10014msec) 00:34:21.082 slat (usec): min=4, max=525, avg=31.64, stdev=19.06 00:34:21.082 clat (usec): min=13865, max=66831, avg=32522.78, stdev=4195.69 00:34:21.082 lat (usec): min=13873, max=66852, avg=32554.42, stdev=4199.31 00:34:21.082 clat percentiles (usec): 00:34:21.082 | 1.00th=[20055], 5.00th=[23200], 10.00th=[28443], 20.00th=[32375], 00:34:21.082 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:34:21.082 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:34:21.082 | 99.00th=[45876], 99.50th=[51643], 99.90th=[66847], 99.95th=[66847], 00:34:21.082 | 99.99th=[66847] 00:34:21.082 bw ( KiB/s): min= 1664, max= 2272, per=4.14%, avg=1944.42, stdev=128.48, samples=19 00:34:21.082 iops : min= 416, max= 568, avg=486.11, stdev=32.12, samples=19 00:34:21.082 lat (msec) : 20=0.78%, 50=98.69%, 100=0.53% 00:34:21.082 cpu : usr=98.68%, sys=0.86%, ctx=58, majf=0, minf=1634 00:34:21.082 IO depths : 1=3.3%, 2=8.5%, 4=21.6%, 8=57.0%, 16=9.6%, 32=0.0%, >=64=0.0% 00:34:21.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.082 complete : 0=0.0%, 4=93.3%, 8=1.3%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.082 issued rwts: total=4884,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:21.082 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:21.082 filename1: (groupid=0, jobs=1): err= 0: pid=1699429: Tue Apr 23 21:35:14 2024 00:34:21.082 read: IOPS=477, BW=1910KiB/s (1956kB/s)(18.7MiB/10017msec) 00:34:21.082 slat (nsec): min=6238, max=53700, avg=14117.81, stdev=7365.41 00:34:21.082 clat (usec): min=16179, max=61247, avg=33371.73, stdev=2943.49 00:34:21.082 lat (usec): min=16198, max=61275, avg=33385.85, stdev=2943.39 00:34:21.082 clat percentiles (usec): 00:34:21.082 | 1.00th=[18482], 5.00th=[32113], 10.00th=[32375], 20.00th=[32900], 00:34:21.082 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:34:21.082 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:34:21.082 | 99.00th=[50070], 99.50th=[52167], 99.90th=[61080], 99.95th=[61080], 00:34:21.082 | 99.99th=[61080] 00:34:21.082 bw ( KiB/s): min= 1776, max= 2048, per=4.06%, avg=1907.20, stdev=59.55, samples=20 00:34:21.082 iops : min= 444, max= 512, avg=476.80, stdev=14.89, samples=20 00:34:21.082 lat (msec) : 20=1.02%, 50=98.10%, 100=0.88% 00:34:21.082 cpu : usr=91.82%, sys=3.65%, ctx=235, majf=0, minf=1637 00:34:21.082 IO depths : 1=5.3%, 2=11.4%, 4=24.5%, 8=51.5%, 16=7.3%, 32=0.0%, >=64=0.0% 00:34:21.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.082 complete : 0=0.0%, 4=94.1%, 8=0.2%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.082 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:21.082 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:21.082 filename1: (groupid=0, jobs=1): err= 0: pid=1699430: Tue Apr 23 21:35:14 2024 00:34:21.082 read: IOPS=477, BW=1909KiB/s (1955kB/s)(18.7MiB/10022msec) 00:34:21.082 slat (usec): min=7, max=168, avg=22.29, stdev=11.44 00:34:21.082 clat (usec): min=22570, max=68725, avg=33319.14, stdev=2445.76 00:34:21.082 lat (usec): min=22586, max=68787, avg=33341.44, stdev=2445.66 
00:34:21.082 clat percentiles (usec): 00:34:21.082 | 1.00th=[27395], 5.00th=[31851], 10.00th=[32375], 20.00th=[32900], 00:34:21.082 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33162], 00:34:21.082 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:34:21.082 | 99.00th=[39060], 99.50th=[41681], 99.90th=[68682], 99.95th=[68682], 00:34:21.082 | 99.99th=[68682] 00:34:21.082 bw ( KiB/s): min= 1664, max= 2048, per=4.06%, avg=1905.85, stdev=70.72, samples=20 00:34:21.082 iops : min= 416, max= 512, avg=476.45, stdev=17.68, samples=20 00:34:21.082 lat (msec) : 50=99.62%, 100=0.38% 00:34:21.082 cpu : usr=95.21%, sys=2.40%, ctx=43, majf=0, minf=1636 00:34:21.082 IO depths : 1=5.8%, 2=12.1%, 4=24.9%, 8=50.5%, 16=6.7%, 32=0.0%, >=64=0.0% 00:34:21.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.082 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.082 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:21.082 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:21.082 filename1: (groupid=0, jobs=1): err= 0: pid=1699431: Tue Apr 23 21:35:14 2024 00:34:21.082 read: IOPS=475, BW=1903KiB/s (1949kB/s)(18.6MiB/10022msec) 00:34:21.082 slat (nsec): min=5183, max=49494, avg=12947.12, stdev=6699.99 00:34:21.082 clat (usec): min=17720, max=73149, avg=33487.16, stdev=2715.23 00:34:21.082 lat (usec): min=17730, max=73175, avg=33500.11, stdev=2714.99 00:34:21.082 clat percentiles (usec): 00:34:21.082 | 1.00th=[31589], 5.00th=[32113], 10.00th=[32637], 20.00th=[32900], 00:34:21.082 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:34:21.082 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:34:21.082 | 99.00th=[35390], 99.50th=[52167], 99.90th=[72877], 99.95th=[72877], 00:34:21.082 | 99.99th=[72877] 00:34:21.082 bw ( KiB/s): min= 1664, max= 1936, per=4.05%, avg=1899.79, stdev=63.07, samples=19 00:34:21.082 iops : min= 416, max= 484, avg=474.95, stdev=15.77, samples=19 00:34:21.082 lat (msec) : 20=0.17%, 50=99.33%, 100=0.50% 00:34:21.082 cpu : usr=98.57%, sys=0.93%, ctx=43, majf=0, minf=1636 00:34:21.082 IO depths : 1=2.1%, 2=8.3%, 4=25.0%, 8=54.2%, 16=10.4%, 32=0.0%, >=64=0.0% 00:34:21.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.082 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.082 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:21.082 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:21.082 filename2: (groupid=0, jobs=1): err= 0: pid=1699432: Tue Apr 23 21:35:14 2024 00:34:21.082 read: IOPS=477, BW=1909KiB/s (1955kB/s)(18.7MiB/10018msec) 00:34:21.082 slat (usec): min=4, max=498, avg=34.78, stdev=16.54 00:34:21.082 clat (usec): min=20236, max=71109, avg=33228.85, stdev=2492.46 00:34:21.082 lat (usec): min=20245, max=71137, avg=33263.63, stdev=2491.52 00:34:21.082 clat percentiles (usec): 00:34:21.082 | 1.00th=[30540], 5.00th=[31851], 10.00th=[32375], 20.00th=[32637], 00:34:21.082 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:34:21.082 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:34:21.082 | 99.00th=[35390], 99.50th=[35914], 99.90th=[70779], 99.95th=[70779], 00:34:21.082 | 99.99th=[70779] 00:34:21.082 bw ( KiB/s): min= 1664, max= 2048, per=4.06%, avg=1906.80, stdev=70.67, samples=20 00:34:21.082 iops : min= 416, max= 512, avg=476.70, stdev=17.67, samples=20 00:34:21.082 lat (msec) : 
50=99.67%, 100=0.33% 00:34:21.082 cpu : usr=97.23%, sys=1.45%, ctx=53, majf=0, minf=1636 00:34:21.082 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:21.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.082 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.082 issued rwts: total=4782,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:21.082 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:21.082 filename2: (groupid=0, jobs=1): err= 0: pid=1699433: Tue Apr 23 21:35:14 2024 00:34:21.082 read: IOPS=434, BW=1738KiB/s (1780kB/s)(17.0MiB/10011msec) 00:34:21.082 slat (usec): min=4, max=502, avg=23.71, stdev=18.13 00:34:21.082 clat (usec): min=13842, max=89205, avg=36680.95, stdev=6479.10 00:34:21.082 lat (usec): min=13851, max=89226, avg=36704.67, stdev=6477.99 00:34:21.082 clat percentiles (usec): 00:34:21.082 | 1.00th=[24249], 5.00th=[32637], 10.00th=[32900], 20.00th=[33162], 00:34:21.082 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:34:21.082 | 70.00th=[38536], 80.00th=[42730], 90.00th=[45876], 95.00th=[48497], 00:34:21.082 | 99.00th=[50070], 99.50th=[53216], 99.90th=[89654], 99.95th=[89654], 00:34:21.082 | 99.99th=[89654] 00:34:21.082 bw ( KiB/s): min= 1408, max= 1920, per=3.69%, avg=1729.84, stdev=211.80, samples=19 00:34:21.082 iops : min= 352, max= 480, avg=432.42, stdev=52.96, samples=19 00:34:21.082 lat (msec) : 20=0.71%, 50=98.05%, 100=1.24% 00:34:21.082 cpu : usr=98.22%, sys=1.23%, ctx=120, majf=0, minf=1634 00:34:21.082 IO depths : 1=1.1%, 2=2.1%, 4=10.4%, 8=71.9%, 16=14.5%, 32=0.0%, >=64=0.0% 00:34:21.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.082 complete : 0=0.0%, 4=91.7%, 8=5.5%, 16=2.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.082 issued rwts: total=4350,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:21.082 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:21.082 filename2: (groupid=0, jobs=1): err= 0: pid=1699434: Tue Apr 23 21:35:14 2024 00:34:21.083 read: IOPS=477, BW=1910KiB/s (1956kB/s)(18.7MiB/10020msec) 00:34:21.083 slat (nsec): min=8010, max=62704, avg=21981.87, stdev=10332.42 00:34:21.083 clat (usec): min=22522, max=68776, avg=33313.78, stdev=2321.84 00:34:21.083 lat (usec): min=22530, max=68814, avg=33335.76, stdev=2321.57 00:34:21.083 clat percentiles (usec): 00:34:21.083 | 1.00th=[30540], 5.00th=[32113], 10.00th=[32375], 20.00th=[32900], 00:34:21.083 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33162], 00:34:21.083 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:34:21.083 | 99.00th=[35914], 99.50th=[40109], 99.90th=[68682], 99.95th=[68682], 00:34:21.083 | 99.99th=[68682] 00:34:21.083 bw ( KiB/s): min= 1664, max= 2048, per=4.06%, avg=1905.85, stdev=70.72, samples=20 00:34:21.083 iops : min= 416, max= 512, avg=476.45, stdev=17.68, samples=20 00:34:21.083 lat (msec) : 50=99.67%, 100=0.33% 00:34:21.083 cpu : usr=98.83%, sys=0.74%, ctx=15, majf=0, minf=1636 00:34:21.083 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:34:21.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.083 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.083 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:21.083 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:21.083 filename2: (groupid=0, jobs=1): err= 0: pid=1699435: Tue Apr 23 21:35:14 2024 
00:34:21.083 read: IOPS=478, BW=1913KiB/s (1959kB/s)(18.7MiB/10001msec) 00:34:21.083 slat (nsec): min=5487, max=80030, avg=16416.49, stdev=13059.09 00:34:21.083 clat (usec): min=20768, max=61108, avg=33322.80, stdev=1330.24 00:34:21.083 lat (usec): min=20780, max=61122, avg=33339.21, stdev=1330.37 00:34:21.083 clat percentiles (usec): 00:34:21.083 | 1.00th=[31065], 5.00th=[32113], 10.00th=[32375], 20.00th=[32900], 00:34:21.083 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:34:21.083 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:34:21.083 | 99.00th=[35914], 99.50th=[35914], 99.90th=[44303], 99.95th=[44303], 00:34:21.083 | 99.99th=[61080] 00:34:21.083 bw ( KiB/s): min= 1792, max= 2048, per=4.08%, avg=1913.42, stdev=51.41, samples=19 00:34:21.083 iops : min= 448, max= 512, avg=478.32, stdev=12.95, samples=19 00:34:21.083 lat (msec) : 50=99.96%, 100=0.04% 00:34:21.083 cpu : usr=95.22%, sys=2.29%, ctx=47, majf=0, minf=1636 00:34:21.083 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:34:21.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.083 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.083 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:21.083 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:21.083 filename2: (groupid=0, jobs=1): err= 0: pid=1699436: Tue Apr 23 21:35:14 2024 00:34:21.083 read: IOPS=478, BW=1915KiB/s (1961kB/s)(18.8MiB/10026msec) 00:34:21.083 slat (usec): min=3, max=498, avg=27.52, stdev=17.76 00:34:21.083 clat (usec): min=13422, max=53063, avg=33222.85, stdev=2388.54 00:34:21.083 lat (usec): min=13431, max=53085, avg=33250.37, stdev=2390.24 00:34:21.083 clat percentiles (usec): 00:34:21.083 | 1.00th=[23462], 5.00th=[31851], 10.00th=[32375], 20.00th=[32637], 00:34:21.083 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33424], 00:34:21.083 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:34:21.083 | 99.00th=[43254], 99.50th=[44303], 99.90th=[46400], 99.95th=[46400], 00:34:21.083 | 99.99th=[53216] 00:34:21.083 bw ( KiB/s): min= 1792, max= 2048, per=4.08%, avg=1913.60, stdev=50.44, samples=20 00:34:21.083 iops : min= 448, max= 512, avg=478.40, stdev=12.61, samples=20 00:34:21.083 lat (msec) : 20=0.04%, 50=99.92%, 100=0.04% 00:34:21.083 cpu : usr=98.65%, sys=0.84%, ctx=84, majf=0, minf=1636 00:34:21.083 IO depths : 1=5.0%, 2=11.3%, 4=25.0%, 8=51.2%, 16=7.5%, 32=0.0%, >=64=0.0% 00:34:21.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.083 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.083 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:21.083 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:21.083 filename2: (groupid=0, jobs=1): err= 0: pid=1699437: Tue Apr 23 21:35:14 2024 00:34:21.083 read: IOPS=477, BW=1911KiB/s (1957kB/s)(18.7MiB/10011msec) 00:34:21.083 slat (usec): min=5, max=492, avg=34.20, stdev=17.39 00:34:21.083 clat (usec): min=18765, max=64880, avg=33213.17, stdev=2296.60 00:34:21.083 lat (usec): min=18784, max=64906, avg=33247.37, stdev=2295.77 00:34:21.083 clat percentiles (usec): 00:34:21.083 | 1.00th=[29230], 5.00th=[31851], 10.00th=[32375], 20.00th=[32637], 00:34:21.083 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:34:21.083 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:34:21.083 | 
99.00th=[36963], 99.50th=[38011], 99.90th=[64750], 99.95th=[64750], 00:34:21.083 | 99.99th=[64750] 00:34:21.083 bw ( KiB/s): min= 1664, max= 2032, per=4.06%, avg=1905.68, stdev=71.33, samples=19 00:34:21.083 iops : min= 416, max= 508, avg=476.42, stdev=17.83, samples=19 00:34:21.083 lat (msec) : 20=0.29%, 50=99.33%, 100=0.38% 00:34:21.083 cpu : usr=98.84%, sys=0.75%, ctx=12, majf=0, minf=1636 00:34:21.083 IO depths : 1=0.7%, 2=6.9%, 4=25.0%, 8=55.6%, 16=11.8%, 32=0.0%, >=64=0.0% 00:34:21.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.083 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.083 issued rwts: total=4782,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:21.083 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:21.083 filename2: (groupid=0, jobs=1): err= 0: pid=1699439: Tue Apr 23 21:35:14 2024 00:34:21.083 read: IOPS=493, BW=1974KiB/s (2022kB/s)(19.3MiB/10020msec) 00:34:21.083 slat (usec): min=4, max=146, avg=14.40, stdev= 9.42 00:34:21.083 clat (usec): min=11945, max=41667, avg=32288.95, stdev=3308.39 00:34:21.083 lat (usec): min=11954, max=41686, avg=32303.36, stdev=3309.67 00:34:21.083 clat percentiles (usec): 00:34:21.083 | 1.00th=[20841], 5.00th=[22676], 10.00th=[30016], 20.00th=[32637], 00:34:21.083 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33424], 00:34:21.083 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:34:21.083 | 99.00th=[35914], 99.50th=[39060], 99.90th=[41681], 99.95th=[41681], 00:34:21.083 | 99.99th=[41681] 00:34:21.083 bw ( KiB/s): min= 1808, max= 2560, per=4.20%, avg=1972.00, stdev=169.32, samples=20 00:34:21.083 iops : min= 452, max= 640, avg=493.00, stdev=42.33, samples=20 00:34:21.083 lat (msec) : 20=0.65%, 50=99.35% 00:34:21.083 cpu : usr=91.81%, sys=3.80%, ctx=125, majf=0, minf=1637 00:34:21.083 IO depths : 1=5.3%, 2=11.0%, 4=23.2%, 8=53.3%, 16=7.2%, 32=0.0%, >=64=0.0% 00:34:21.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.083 complete : 0=0.0%, 4=93.6%, 8=0.6%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.083 issued rwts: total=4946,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:21.083 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:21.083 filename2: (groupid=0, jobs=1): err= 0: pid=1699440: Tue Apr 23 21:35:14 2024 00:34:21.083 read: IOPS=477, BW=1911KiB/s (1957kB/s)(18.7MiB/10015msec) 00:34:21.083 slat (usec): min=4, max=513, avg=35.78, stdev=16.25 00:34:21.083 clat (usec): min=18760, max=79571, avg=33158.19, stdev=2488.74 00:34:21.083 lat (usec): min=18783, max=79591, avg=33193.98, stdev=2488.23 00:34:21.083 clat percentiles (usec): 00:34:21.083 | 1.00th=[30540], 5.00th=[31851], 10.00th=[32375], 20.00th=[32637], 00:34:21.083 | 30.00th=[32900], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:34:21.083 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:34:21.083 | 99.00th=[35390], 99.50th=[35914], 99.90th=[68682], 99.95th=[68682], 00:34:21.083 | 99.99th=[79168] 00:34:21.083 bw ( KiB/s): min= 1664, max= 2048, per=4.06%, avg=1907.20, stdev=70.72, samples=20 00:34:21.083 iops : min= 416, max= 512, avg=476.80, stdev=17.68, samples=20 00:34:21.083 lat (msec) : 20=0.33%, 50=99.33%, 100=0.33% 00:34:21.083 cpu : usr=98.74%, sys=0.83%, ctx=40, majf=0, minf=1634 00:34:21.083 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:21.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.083 complete : 0=0.0%, 
4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.083 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:21.083 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:21.083 00:34:21.083 Run status group 0 (all jobs): 00:34:21.083 READ: bw=45.8MiB/s (48.0MB/s), 1738KiB/s-2863KiB/s (1780kB/s-2932kB/s), io=459MiB (482MB), run=10001-10027msec 00:34:21.083 ----------------------------------------------------- 00:34:21.083 Suppressions used: 00:34:21.083 count bytes template 00:34:21.083 45 402 /usr/src/fio/parse.c 00:34:21.083 1 8 libtcmalloc_minimal.so 00:34:21.083 1 904 libcrypto.so 00:34:21.083 ----------------------------------------------------- 00:34:21.083 00:34:21.083 21:35:14 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:34:21.083 21:35:14 -- target/dif.sh@43 -- # local sub 00:34:21.083 21:35:14 -- target/dif.sh@45 -- # for sub in "$@" 00:34:21.083 21:35:14 -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:21.083 21:35:14 -- target/dif.sh@36 -- # local sub_id=0 00:34:21.083 21:35:14 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:21.083 21:35:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:21.083 21:35:14 -- common/autotest_common.sh@10 -- # set +x 00:34:21.083 21:35:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:21.083 21:35:14 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:21.083 21:35:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:21.083 21:35:14 -- common/autotest_common.sh@10 -- # set +x 00:34:21.083 21:35:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:21.083 21:35:14 -- target/dif.sh@45 -- # for sub in "$@" 00:34:21.083 21:35:14 -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:21.083 21:35:14 -- target/dif.sh@36 -- # local sub_id=1 00:34:21.083 21:35:14 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:21.083 21:35:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:21.083 21:35:14 -- common/autotest_common.sh@10 -- # set +x 00:34:21.083 21:35:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:21.083 21:35:14 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:21.083 21:35:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:21.083 21:35:14 -- common/autotest_common.sh@10 -- # set +x 00:34:21.083 21:35:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:21.083 21:35:14 -- target/dif.sh@45 -- # for sub in "$@" 00:34:21.083 21:35:14 -- target/dif.sh@46 -- # destroy_subsystem 2 00:34:21.083 21:35:14 -- target/dif.sh@36 -- # local sub_id=2 00:34:21.083 21:35:14 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:34:21.084 21:35:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:21.084 21:35:14 -- common/autotest_common.sh@10 -- # set +x 00:34:21.084 21:35:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:21.084 21:35:14 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:34:21.084 21:35:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:21.084 21:35:14 -- common/autotest_common.sh@10 -- # set +x 00:34:21.084 21:35:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:21.084 21:35:14 -- target/dif.sh@115 -- # NULL_DIF=1 00:34:21.084 21:35:14 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:34:21.084 21:35:14 -- target/dif.sh@115 -- # numjobs=2 00:34:21.084 21:35:14 -- target/dif.sh@115 -- # iodepth=8 00:34:21.084 21:35:14 -- target/dif.sh@115 -- # runtime=5 
00:34:21.084 21:35:14 -- target/dif.sh@115 -- # files=1 00:34:21.084 21:35:14 -- target/dif.sh@117 -- # create_subsystems 0 1 00:34:21.084 21:35:14 -- target/dif.sh@28 -- # local sub 00:34:21.084 21:35:14 -- target/dif.sh@30 -- # for sub in "$@" 00:34:21.084 21:35:14 -- target/dif.sh@31 -- # create_subsystem 0 00:34:21.084 21:35:14 -- target/dif.sh@18 -- # local sub_id=0 00:34:21.084 21:35:14 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:21.084 21:35:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:21.084 21:35:14 -- common/autotest_common.sh@10 -- # set +x 00:34:21.084 bdev_null0 00:34:21.084 21:35:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:21.084 21:35:14 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:21.084 21:35:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:21.084 21:35:14 -- common/autotest_common.sh@10 -- # set +x 00:34:21.084 21:35:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:21.084 21:35:14 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:21.084 21:35:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:21.084 21:35:14 -- common/autotest_common.sh@10 -- # set +x 00:34:21.084 21:35:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:21.084 21:35:14 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:21.084 21:35:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:21.084 21:35:14 -- common/autotest_common.sh@10 -- # set +x 00:34:21.084 [2024-04-23 21:35:14.806905] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:21.084 21:35:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:21.084 21:35:14 -- target/dif.sh@30 -- # for sub in "$@" 00:34:21.084 21:35:14 -- target/dif.sh@31 -- # create_subsystem 1 00:34:21.084 21:35:14 -- target/dif.sh@18 -- # local sub_id=1 00:34:21.084 21:35:14 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:21.084 21:35:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:21.084 21:35:14 -- common/autotest_common.sh@10 -- # set +x 00:34:21.084 bdev_null1 00:34:21.084 21:35:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:21.084 21:35:14 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:21.084 21:35:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:21.084 21:35:14 -- common/autotest_common.sh@10 -- # set +x 00:34:21.084 21:35:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:21.084 21:35:14 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:21.084 21:35:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:21.084 21:35:14 -- common/autotest_common.sh@10 -- # set +x 00:34:21.084 21:35:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:21.084 21:35:14 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:21.084 21:35:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:21.084 21:35:14 -- common/autotest_common.sh@10 -- # set +x 00:34:21.084 21:35:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:21.084 21:35:14 -- target/dif.sh@118 -- # fio 
/dev/fd/62 00:34:21.084 21:35:14 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:21.084 21:35:14 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:21.084 21:35:14 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:34:21.084 21:35:14 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:34:21.084 21:35:14 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:21.084 21:35:14 -- common/autotest_common.sh@1325 -- # local sanitizers 00:34:21.084 21:35:14 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:34:21.084 21:35:14 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:21.084 21:35:14 -- common/autotest_common.sh@1327 -- # shift 00:34:21.084 21:35:14 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:34:21.084 21:35:14 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:34:21.084 21:35:14 -- nvmf/common.sh@521 -- # config=() 00:34:21.084 21:35:14 -- nvmf/common.sh@521 -- # local subsystem config 00:34:21.084 21:35:14 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:34:21.084 21:35:14 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:34:21.084 { 00:34:21.084 "params": { 00:34:21.084 "name": "Nvme$subsystem", 00:34:21.084 "trtype": "$TEST_TRANSPORT", 00:34:21.084 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:21.084 "adrfam": "ipv4", 00:34:21.084 "trsvcid": "$NVMF_PORT", 00:34:21.084 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:21.084 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:21.084 "hdgst": ${hdgst:-false}, 00:34:21.084 "ddgst": ${ddgst:-false} 00:34:21.084 }, 00:34:21.084 "method": "bdev_nvme_attach_controller" 00:34:21.084 } 00:34:21.084 EOF 00:34:21.084 )") 00:34:21.084 21:35:14 -- target/dif.sh@82 -- # gen_fio_conf 00:34:21.084 21:35:14 -- target/dif.sh@54 -- # local file 00:34:21.084 21:35:14 -- target/dif.sh@56 -- # cat 00:34:21.084 21:35:14 -- nvmf/common.sh@543 -- # cat 00:34:21.084 21:35:14 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:34:21.084 21:35:14 -- common/autotest_common.sh@1331 -- # grep libasan 00:34:21.084 21:35:14 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:34:21.084 21:35:14 -- target/dif.sh@72 -- # (( file = 1 )) 00:34:21.084 21:35:14 -- target/dif.sh@72 -- # (( file <= files )) 00:34:21.084 21:35:14 -- target/dif.sh@73 -- # cat 00:34:21.084 21:35:14 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:34:21.084 21:35:14 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:34:21.084 { 00:34:21.084 "params": { 00:34:21.084 "name": "Nvme$subsystem", 00:34:21.084 "trtype": "$TEST_TRANSPORT", 00:34:21.084 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:21.084 "adrfam": "ipv4", 00:34:21.084 "trsvcid": "$NVMF_PORT", 00:34:21.084 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:21.084 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:21.084 "hdgst": ${hdgst:-false}, 00:34:21.084 "ddgst": ${ddgst:-false} 00:34:21.084 }, 00:34:21.084 "method": "bdev_nvme_attach_controller" 00:34:21.084 } 00:34:21.084 EOF 00:34:21.084 )") 00:34:21.084 21:35:14 -- target/dif.sh@72 -- # (( file++ )) 00:34:21.084 21:35:14 -- target/dif.sh@72 -- # (( file <= files )) 00:34:21.084 21:35:14 -- nvmf/common.sh@543 -- # cat 00:34:21.084 21:35:14 -- 
nvmf/common.sh@545 -- # jq . 00:34:21.084 21:35:14 -- nvmf/common.sh@546 -- # IFS=, 00:34:21.084 21:35:14 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:34:21.084 "params": { 00:34:21.084 "name": "Nvme0", 00:34:21.084 "trtype": "tcp", 00:34:21.084 "traddr": "10.0.0.2", 00:34:21.084 "adrfam": "ipv4", 00:34:21.084 "trsvcid": "4420", 00:34:21.084 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:21.084 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:21.084 "hdgst": false, 00:34:21.084 "ddgst": false 00:34:21.084 }, 00:34:21.084 "method": "bdev_nvme_attach_controller" 00:34:21.084 },{ 00:34:21.084 "params": { 00:34:21.084 "name": "Nvme1", 00:34:21.084 "trtype": "tcp", 00:34:21.084 "traddr": "10.0.0.2", 00:34:21.084 "adrfam": "ipv4", 00:34:21.084 "trsvcid": "4420", 00:34:21.084 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:21.084 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:21.084 "hdgst": false, 00:34:21.084 "ddgst": false 00:34:21.084 }, 00:34:21.084 "method": "bdev_nvme_attach_controller" 00:34:21.084 }' 00:34:21.084 21:35:14 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:34:21.084 21:35:14 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:34:21.084 21:35:14 -- common/autotest_common.sh@1333 -- # break 00:34:21.084 21:35:14 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:21.084 21:35:14 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:21.084 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:34:21.084 ... 00:34:21.084 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:34:21.084 ... 
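[annotation] The /dev/fd/62 and /dev/fd/61 arguments in the fio command line above are bash process substitutions: the generated JSON target config arrives on one anonymous descriptor and the generated fio job file on the other, so neither touches disk. The call shape, as far as it can be reconstructed from the traced function names (fio_bdev, gen_nvmf_target_json, gen_fio_conf), is roughly:

# /dev/fd/62 carries the SPDK JSON config, /dev/fd/61 the fio job file;
# both exist only for the lifetime of the fio process.
fio_bdev --ioengine=spdk_bdev \
    --spdk_json_conf <(gen_nvmf_target_json 0 1) \
    <(gen_fio_conf)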
00:34:21.084 fio-3.35 00:34:21.084 Starting 4 threads 00:34:21.084 EAL: No free 2048 kB hugepages reported on node 1 00:34:27.643 00:34:27.643 filename0: (groupid=0, jobs=1): err= 0: pid=1702865: Tue Apr 23 21:35:21 2024 00:34:27.643 read: IOPS=2512, BW=19.6MiB/s (20.6MB/s)(98.2MiB/5002msec) 00:34:27.643 slat (nsec): min=5670, max=79218, avg=8640.02, stdev=3804.69 00:34:27.643 clat (usec): min=677, max=9583, avg=3160.68, stdev=597.32 00:34:27.643 lat (usec): min=685, max=9617, avg=3169.32, stdev=597.34 00:34:27.643 clat percentiles (usec): 00:34:27.643 | 1.00th=[ 1795], 5.00th=[ 2278], 10.00th=[ 2474], 20.00th=[ 2737], 00:34:27.643 | 30.00th=[ 2900], 40.00th=[ 3032], 50.00th=[ 3163], 60.00th=[ 3228], 00:34:27.643 | 70.00th=[ 3359], 80.00th=[ 3523], 90.00th=[ 3818], 95.00th=[ 4178], 00:34:27.643 | 99.00th=[ 5014], 99.50th=[ 5342], 99.90th=[ 5932], 99.95th=[ 9372], 00:34:27.643 | 99.99th=[ 9503] 00:34:27.643 bw ( KiB/s): min=19200, max=21952, per=26.60%, avg=20040.89, stdev=941.36, samples=9 00:34:27.643 iops : min= 2400, max= 2744, avg=2505.11, stdev=117.67, samples=9 00:34:27.643 lat (usec) : 750=0.01% 00:34:27.643 lat (msec) : 2=2.00%, 4=91.33%, 10=6.66% 00:34:27.643 cpu : usr=96.44%, sys=3.22%, ctx=8, majf=0, minf=1634 00:34:27.643 IO depths : 1=0.1%, 2=3.7%, 4=67.1%, 8=29.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:27.643 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:27.643 complete : 0=0.0%, 4=93.7%, 8=6.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:27.643 issued rwts: total=12566,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:27.643 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:27.643 filename0: (groupid=0, jobs=1): err= 0: pid=1702866: Tue Apr 23 21:35:21 2024 00:34:27.643 read: IOPS=2256, BW=17.6MiB/s (18.5MB/s)(88.2MiB/5002msec) 00:34:27.643 slat (nsec): min=5691, max=79039, avg=9454.04, stdev=4237.48 00:34:27.643 clat (usec): min=1019, max=49475, avg=3519.62, stdev=1383.70 00:34:27.643 lat (usec): min=1025, max=49500, avg=3529.07, stdev=1383.71 00:34:27.643 clat percentiles (usec): 00:34:27.643 | 1.00th=[ 2376], 5.00th=[ 2704], 10.00th=[ 2868], 20.00th=[ 2999], 00:34:27.643 | 30.00th=[ 3130], 40.00th=[ 3195], 50.00th=[ 3326], 60.00th=[ 3458], 00:34:27.643 | 70.00th=[ 3621], 80.00th=[ 3884], 90.00th=[ 4490], 95.00th=[ 4817], 00:34:27.643 | 99.00th=[ 5473], 99.50th=[ 5669], 99.90th=[ 7242], 99.95th=[49546], 00:34:27.643 | 99.99th=[49546] 00:34:27.643 bw ( KiB/s): min=16816, max=18896, per=23.89%, avg=17992.89, stdev=865.15, samples=9 00:34:27.643 iops : min= 2102, max= 2362, avg=2249.11, stdev=108.14, samples=9 00:34:27.643 lat (msec) : 2=0.18%, 4=81.72%, 10=18.03%, 50=0.07% 00:34:27.643 cpu : usr=97.28%, sys=2.36%, ctx=12, majf=0, minf=1635 00:34:27.643 IO depths : 1=0.1%, 2=2.1%, 4=69.4%, 8=28.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:27.643 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:27.643 complete : 0=0.0%, 4=93.4%, 8=6.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:27.643 issued rwts: total=11286,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:27.643 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:27.643 filename1: (groupid=0, jobs=1): err= 0: pid=1702867: Tue Apr 23 21:35:21 2024 00:34:27.643 read: IOPS=2333, BW=18.2MiB/s (19.1MB/s)(91.2MiB/5001msec) 00:34:27.643 slat (nsec): min=5949, max=77330, avg=10415.05, stdev=4818.19 00:34:27.643 clat (usec): min=1039, max=50265, avg=3400.84, stdev=1390.21 00:34:27.643 lat (usec): min=1050, max=50295, avg=3411.25, stdev=1390.27 00:34:27.643 clat percentiles 
(usec): 00:34:27.643 | 1.00th=[ 2180], 5.00th=[ 2474], 10.00th=[ 2671], 20.00th=[ 2900], 00:34:27.643 | 30.00th=[ 3032], 40.00th=[ 3163], 50.00th=[ 3261], 60.00th=[ 3392], 00:34:27.643 | 70.00th=[ 3556], 80.00th=[ 3785], 90.00th=[ 4228], 95.00th=[ 4686], 00:34:27.643 | 99.00th=[ 5538], 99.50th=[ 5932], 99.90th=[ 6915], 99.95th=[50070], 00:34:27.643 | 99.99th=[50070] 00:34:27.643 bw ( KiB/s): min=17216, max=20096, per=24.69%, avg=18599.44, stdev=1022.54, samples=9 00:34:27.643 iops : min= 2152, max= 2512, avg=2324.89, stdev=127.85, samples=9 00:34:27.643 lat (msec) : 2=0.23%, 4=85.95%, 10=13.75%, 100=0.07% 00:34:27.643 cpu : usr=97.36%, sys=2.30%, ctx=12, majf=0, minf=1635 00:34:27.643 IO depths : 1=0.1%, 2=1.5%, 4=69.6%, 8=28.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:27.643 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:27.643 complete : 0=0.0%, 4=93.6%, 8=6.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:27.643 issued rwts: total=11669,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:27.643 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:27.643 filename1: (groupid=0, jobs=1): err= 0: pid=1702868: Tue Apr 23 21:35:21 2024 00:34:27.643 read: IOPS=2314, BW=18.1MiB/s (19.0MB/s)(90.4MiB/5002msec) 00:34:27.643 slat (nsec): min=3933, max=81583, avg=7785.17, stdev=3468.34 00:34:27.643 clat (usec): min=723, max=8316, avg=3435.95, stdev=695.36 00:34:27.643 lat (usec): min=733, max=8333, avg=3443.74, stdev=695.44 00:34:27.643 clat percentiles (usec): 00:34:27.643 | 1.00th=[ 1614], 5.00th=[ 2507], 10.00th=[ 2737], 20.00th=[ 2966], 00:34:27.643 | 30.00th=[ 3097], 40.00th=[ 3195], 50.00th=[ 3326], 60.00th=[ 3490], 00:34:27.643 | 70.00th=[ 3654], 80.00th=[ 3851], 90.00th=[ 4293], 95.00th=[ 4686], 00:34:27.643 | 99.00th=[ 5538], 99.50th=[ 5735], 99.90th=[ 7308], 99.95th=[ 8160], 00:34:27.643 | 99.99th=[ 8291] 00:34:27.643 bw ( KiB/s): min=16832, max=21034, per=24.56%, avg=18497.11, stdev=1383.95, samples=9 00:34:27.643 iops : min= 2104, max= 2629, avg=2312.11, stdev=172.94, samples=9 00:34:27.643 lat (usec) : 750=0.01%, 1000=0.02% 00:34:27.643 lat (msec) : 2=1.96%, 4=81.11%, 10=16.91% 00:34:27.643 cpu : usr=97.32%, sys=2.36%, ctx=8, majf=0, minf=1635 00:34:27.643 IO depths : 1=0.1%, 2=1.5%, 4=68.9%, 8=29.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:27.643 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:27.643 complete : 0=0.0%, 4=94.1%, 8=5.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:27.643 issued rwts: total=11576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:27.643 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:27.643 00:34:27.643 Run status group 0 (all jobs): 00:34:27.643 READ: bw=73.6MiB/s (77.1MB/s), 17.6MiB/s-19.6MiB/s (18.5MB/s-20.6MB/s), io=368MiB (386MB), run=5001-5002msec 00:34:27.643 ----------------------------------------------------- 00:34:27.643 Suppressions used: 00:34:27.643 count bytes template 00:34:27.643 6 52 /usr/src/fio/parse.c 00:34:27.643 1 8 libtcmalloc_minimal.so 00:34:27.643 1 904 libcrypto.so 00:34:27.643 ----------------------------------------------------- 00:34:27.643 00:34:27.643 21:35:21 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:34:27.643 21:35:21 -- target/dif.sh@43 -- # local sub 00:34:27.643 21:35:21 -- target/dif.sh@45 -- # for sub in "$@" 00:34:27.643 21:35:21 -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:27.643 21:35:21 -- target/dif.sh@36 -- # local sub_id=0 00:34:27.643 21:35:21 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:27.643 
21:35:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:27.643 21:35:21 -- common/autotest_common.sh@10 -- # set +x 00:34:27.643 21:35:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:27.643 21:35:21 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:27.643 21:35:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:27.643 21:35:21 -- common/autotest_common.sh@10 -- # set +x 00:34:27.643 21:35:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:27.643 21:35:21 -- target/dif.sh@45 -- # for sub in "$@" 00:34:27.643 21:35:21 -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:27.643 21:35:21 -- target/dif.sh@36 -- # local sub_id=1 00:34:27.643 21:35:21 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:27.643 21:35:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:27.643 21:35:21 -- common/autotest_common.sh@10 -- # set +x 00:34:27.643 21:35:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:27.643 21:35:21 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:27.643 21:35:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:27.643 21:35:21 -- common/autotest_common.sh@10 -- # set +x 00:34:27.643 21:35:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:27.643 00:34:27.643 real 0m26.049s 00:34:27.643 user 5m20.682s 00:34:27.643 sys 0m6.366s 00:34:27.643 21:35:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:34:27.643 21:35:21 -- common/autotest_common.sh@10 -- # set +x 00:34:27.643 ************************************ 00:34:27.643 END TEST fio_dif_rand_params 00:34:27.643 ************************************ 00:34:27.901 21:35:21 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:34:27.901 21:35:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:34:27.901 21:35:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:34:27.901 21:35:21 -- common/autotest_common.sh@10 -- # set +x 00:34:27.901 ************************************ 00:34:27.901 START TEST fio_dif_digest 00:34:27.901 ************************************ 00:34:27.901 21:35:21 -- common/autotest_common.sh@1111 -- # fio_dif_digest 00:34:27.901 21:35:21 -- target/dif.sh@123 -- # local NULL_DIF 00:34:27.901 21:35:21 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:34:27.902 21:35:21 -- target/dif.sh@125 -- # local hdgst ddgst 00:34:27.902 21:35:21 -- target/dif.sh@127 -- # NULL_DIF=3 00:34:27.902 21:35:21 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:34:27.902 21:35:21 -- target/dif.sh@127 -- # numjobs=3 00:34:27.902 21:35:21 -- target/dif.sh@127 -- # iodepth=3 00:34:27.902 21:35:21 -- target/dif.sh@127 -- # runtime=10 00:34:27.902 21:35:21 -- target/dif.sh@128 -- # hdgst=true 00:34:27.902 21:35:21 -- target/dif.sh@128 -- # ddgst=true 00:34:27.902 21:35:21 -- target/dif.sh@130 -- # create_subsystems 0 00:34:27.902 21:35:21 -- target/dif.sh@28 -- # local sub 00:34:27.902 21:35:21 -- target/dif.sh@30 -- # for sub in "$@" 00:34:27.902 21:35:21 -- target/dif.sh@31 -- # create_subsystem 0 00:34:27.902 21:35:21 -- target/dif.sh@18 -- # local sub_id=0 00:34:27.902 21:35:21 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:27.902 21:35:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:27.902 21:35:21 -- common/autotest_common.sh@10 -- # set +x 00:34:27.902 bdev_null0 00:34:27.902 21:35:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:27.902 21:35:21 -- target/dif.sh@22 -- 
# rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:27.902 21:35:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:27.902 21:35:22 -- common/autotest_common.sh@10 -- # set +x 00:34:27.902 21:35:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:27.902 21:35:22 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:27.902 21:35:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:27.902 21:35:22 -- common/autotest_common.sh@10 -- # set +x 00:34:27.902 21:35:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:27.902 21:35:22 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:27.902 21:35:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:27.902 21:35:22 -- common/autotest_common.sh@10 -- # set +x 00:34:27.902 [2024-04-23 21:35:22.021704] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:27.902 21:35:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:27.902 21:35:22 -- target/dif.sh@131 -- # fio /dev/fd/62 00:34:27.902 21:35:22 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:27.902 21:35:22 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:27.902 21:35:22 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:34:27.902 21:35:22 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:27.902 21:35:22 -- common/autotest_common.sh@1325 -- # local sanitizers 00:34:27.902 21:35:22 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:34:27.902 21:35:22 -- common/autotest_common.sh@1327 -- # shift 00:34:27.902 21:35:22 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:34:27.902 21:35:22 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:34:27.902 21:35:22 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:34:27.902 21:35:22 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:27.902 21:35:22 -- target/dif.sh@82 -- # gen_fio_conf 00:34:27.902 21:35:22 -- nvmf/common.sh@521 -- # config=() 00:34:27.902 21:35:22 -- target/dif.sh@54 -- # local file 00:34:27.902 21:35:22 -- nvmf/common.sh@521 -- # local subsystem config 00:34:27.902 21:35:22 -- target/dif.sh@56 -- # cat 00:34:27.902 21:35:22 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:34:27.902 21:35:22 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:34:27.902 { 00:34:27.902 "params": { 00:34:27.902 "name": "Nvme$subsystem", 00:34:27.902 "trtype": "$TEST_TRANSPORT", 00:34:27.902 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:27.902 "adrfam": "ipv4", 00:34:27.902 "trsvcid": "$NVMF_PORT", 00:34:27.902 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:27.902 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:27.902 "hdgst": ${hdgst:-false}, 00:34:27.902 "ddgst": ${ddgst:-false} 00:34:27.902 }, 00:34:27.902 "method": "bdev_nvme_attach_controller" 00:34:27.902 } 00:34:27.902 EOF 00:34:27.902 )") 00:34:27.902 21:35:22 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:34:27.902 21:35:22 -- common/autotest_common.sh@1331 -- # grep libasan 00:34:27.902 21:35:22 -- 
common/autotest_common.sh@1331 -- # awk '{print $3}' 00:34:27.902 21:35:22 -- nvmf/common.sh@543 -- # cat 00:34:27.902 21:35:22 -- target/dif.sh@72 -- # (( file = 1 )) 00:34:27.902 21:35:22 -- target/dif.sh@72 -- # (( file <= files )) 00:34:27.902 21:35:22 -- nvmf/common.sh@545 -- # jq . 00:34:27.902 21:35:22 -- nvmf/common.sh@546 -- # IFS=, 00:34:27.902 21:35:22 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:34:27.902 "params": { 00:34:27.902 "name": "Nvme0", 00:34:27.902 "trtype": "tcp", 00:34:27.902 "traddr": "10.0.0.2", 00:34:27.902 "adrfam": "ipv4", 00:34:27.902 "trsvcid": "4420", 00:34:27.902 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:27.902 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:27.902 "hdgst": true, 00:34:27.902 "ddgst": true 00:34:27.902 }, 00:34:27.902 "method": "bdev_nvme_attach_controller" 00:34:27.902 }' 00:34:27.902 21:35:22 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:34:27.902 21:35:22 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:34:27.902 21:35:22 -- common/autotest_common.sh@1333 -- # break 00:34:27.902 21:35:22 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:27.902 21:35:22 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:28.161 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:28.161 ... 00:34:28.161 fio-3.35 00:34:28.161 Starting 3 threads 00:34:28.419 EAL: No free 2048 kB hugepages reported on node 1 00:34:40.711 00:34:40.711 filename0: (groupid=0, jobs=1): err= 0: pid=1704988: Tue Apr 23 21:35:33 2024 00:34:40.711 read: IOPS=256, BW=32.0MiB/s (33.6MB/s)(320MiB/10004msec) 00:34:40.711 slat (nsec): min=5695, max=45721, avg=13146.19, stdev=3856.81 00:34:40.711 clat (usec): min=7819, max=22981, avg=11699.16, stdev=1117.33 00:34:40.711 lat (usec): min=7826, max=23009, avg=11712.31, stdev=1117.51 00:34:40.711 clat percentiles (usec): 00:34:40.711 | 1.00th=[ 9634], 5.00th=[10159], 10.00th=[10421], 20.00th=[10814], 00:34:40.711 | 30.00th=[11076], 40.00th=[11338], 50.00th=[11600], 60.00th=[11863], 00:34:40.711 | 70.00th=[12125], 80.00th=[12518], 90.00th=[13173], 95.00th=[13698], 00:34:40.711 | 99.00th=[14746], 99.50th=[15008], 99.90th=[22938], 99.95th=[22938], 00:34:40.711 | 99.99th=[22938] 00:34:40.711 bw ( KiB/s): min=30720, max=34816, per=34.78%, avg=32714.11, stdev=1029.60, samples=19 00:34:40.711 iops : min= 240, max= 272, avg=255.58, stdev= 8.04, samples=19 00:34:40.711 lat (msec) : 10=3.32%, 20=96.57%, 50=0.12% 00:34:40.711 cpu : usr=96.37%, sys=3.30%, ctx=13, majf=0, minf=1634 00:34:40.711 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:40.711 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:40.711 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:40.711 issued rwts: total=2562,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:40.711 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:40.711 filename0: (groupid=0, jobs=1): err= 0: pid=1704990: Tue Apr 23 21:35:33 2024 00:34:40.711 read: IOPS=232, BW=29.1MiB/s (30.5MB/s)(292MiB/10046msec) 00:34:40.711 slat (nsec): min=3668, max=39495, avg=10370.54, stdev=3112.97 00:34:40.711 clat (usec): min=9749, max=52581, avg=12860.43, stdev=1686.64 00:34:40.711 lat (usec): min=9760, max=52593, avg=12870.80, stdev=1686.96 
00:34:40.711 clat percentiles (usec): 00:34:40.711 | 1.00th=[10552], 5.00th=[11076], 10.00th=[11338], 20.00th=[11731], 00:34:40.711 | 30.00th=[12125], 40.00th=[12387], 50.00th=[12649], 60.00th=[12911], 00:34:40.711 | 70.00th=[13304], 80.00th=[13829], 90.00th=[14615], 95.00th=[15139], 00:34:40.711 | 99.00th=[16450], 99.50th=[16909], 99.90th=[22414], 99.95th=[46924], 00:34:40.711 | 99.99th=[52691] 00:34:40.711 bw ( KiB/s): min=27136, max=31488, per=31.78%, avg=29900.80, stdev=1228.01, samples=20 00:34:40.711 iops : min= 212, max= 246, avg=233.60, stdev= 9.59, samples=20 00:34:40.711 lat (msec) : 10=0.13%, 20=99.66%, 50=0.17%, 100=0.04% 00:34:40.711 cpu : usr=97.28%, sys=2.41%, ctx=15, majf=0, minf=1634 00:34:40.711 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:40.711 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:40.711 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:40.711 issued rwts: total=2338,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:40.711 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:40.711 filename0: (groupid=0, jobs=1): err= 0: pid=1704991: Tue Apr 23 21:35:33 2024 00:34:40.711 read: IOPS=247, BW=30.9MiB/s (32.4MB/s)(310MiB/10044msec) 00:34:40.711 slat (nsec): min=4296, max=35143, avg=10359.93, stdev=2926.26 00:34:40.711 clat (usec): min=9373, max=52881, avg=12105.80, stdev=1554.27 00:34:40.711 lat (usec): min=9391, max=52889, avg=12116.16, stdev=1554.51 00:34:40.711 clat percentiles (usec): 00:34:40.711 | 1.00th=[ 9896], 5.00th=[10552], 10.00th=[10814], 20.00th=[11207], 00:34:40.711 | 30.00th=[11469], 40.00th=[11731], 50.00th=[11994], 60.00th=[12256], 00:34:40.711 | 70.00th=[12518], 80.00th=[12911], 90.00th=[13435], 95.00th=[14091], 00:34:40.711 | 99.00th=[15401], 99.50th=[16057], 99.90th=[20579], 99.95th=[46924], 00:34:40.711 | 99.99th=[52691] 00:34:40.711 bw ( KiB/s): min=28672, max=33536, per=33.76%, avg=31759.95, stdev=1224.90, samples=20 00:34:40.711 iops : min= 224, max= 262, avg=248.10, stdev= 9.57, samples=20 00:34:40.711 lat (msec) : 10=1.21%, 20=98.67%, 50=0.08%, 100=0.04% 00:34:40.711 cpu : usr=97.45%, sys=2.24%, ctx=15, majf=0, minf=1635 00:34:40.711 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:40.711 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:40.711 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:40.711 issued rwts: total=2483,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:40.711 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:40.711 00:34:40.711 Run status group 0 (all jobs): 00:34:40.711 READ: bw=91.9MiB/s (96.3MB/s), 29.1MiB/s-32.0MiB/s (30.5MB/s-33.6MB/s), io=923MiB (968MB), run=10004-10046msec 00:34:40.711 ----------------------------------------------------- 00:34:40.711 Suppressions used: 00:34:40.711 count bytes template 00:34:40.711 5 44 /usr/src/fio/parse.c 00:34:40.711 1 8 libtcmalloc_minimal.so 00:34:40.711 1 904 libcrypto.so 00:34:40.711 ----------------------------------------------------- 00:34:40.711 00:34:40.711 21:35:33 -- target/dif.sh@132 -- # destroy_subsystems 0 00:34:40.711 21:35:33 -- target/dif.sh@43 -- # local sub 00:34:40.711 21:35:33 -- target/dif.sh@45 -- # for sub in "$@" 00:34:40.712 21:35:33 -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:40.712 21:35:33 -- target/dif.sh@36 -- # local sub_id=0 00:34:40.712 21:35:33 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:40.712 
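
Both fio passes in this test reach the target through fio's spdk_bdev ioengine rather than the kernel initiator: the script LD_PRELOADs build/fio/spdk_bdev into the stock fio binary and hands it a bdev_nvme_attach_controller config on /dev/fd/62 (its printf is visible in the trace above, with hdgst and ddgst forced to true for this digest run) plus the generated job file on /dev/fd/61. A stand-alone sketch of the same wiring follows; the outer "subsystems" wrapper and the Nvme0n1 filename are reconstructed from SPDK's usual conventions rather than shown in the log, so treat both as assumptions.

# Approximate stand-alone version of the fio_bdev invocation traced above.
cat > /tmp/bdev_nvme.json <<'EOF'
{ "subsystems": [ { "subsystem": "bdev", "config": [ {
    "method": "bdev_nvme_attach_controller",
    "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                "adrfam": "ipv4", "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": true, "ddgst": true } } ] } ] }
EOF
LD_PRELOAD=./build/fio/spdk_bdev /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf=/tmp/bdev_nvme.json --thread \
    --name=filename0 --filename=Nvme0n1 \
    --rw=randread --bs=128k --iodepth=3 --numjobs=3 --runtime=10 --time_based
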
21:35:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:40.712 21:35:33 -- common/autotest_common.sh@10 -- # set +x 00:34:40.712 21:35:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:40.712 21:35:33 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:40.712 21:35:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:40.712 21:35:33 -- common/autotest_common.sh@10 -- # set +x 00:34:40.712 21:35:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:40.712 00:34:40.712 real 0m11.761s 00:34:40.712 user 0m46.823s 00:34:40.712 sys 0m1.199s 00:34:40.712 21:35:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:34:40.712 21:35:33 -- common/autotest_common.sh@10 -- # set +x 00:34:40.712 ************************************ 00:34:40.712 END TEST fio_dif_digest 00:34:40.712 ************************************ 00:34:40.712 21:35:33 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:34:40.712 21:35:33 -- target/dif.sh@147 -- # nvmftestfini 00:34:40.712 21:35:33 -- nvmf/common.sh@477 -- # nvmfcleanup 00:34:40.712 21:35:33 -- nvmf/common.sh@117 -- # sync 00:34:40.712 21:35:33 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:40.712 21:35:33 -- nvmf/common.sh@120 -- # set +e 00:34:40.712 21:35:33 -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:40.712 21:35:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:40.712 rmmod nvme_tcp 00:34:40.712 rmmod nvme_fabrics 00:34:40.712 rmmod nvme_keyring 00:34:40.712 21:35:33 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:40.712 21:35:33 -- nvmf/common.sh@124 -- # set -e 00:34:40.712 21:35:33 -- nvmf/common.sh@125 -- # return 0 00:34:40.712 21:35:33 -- nvmf/common.sh@478 -- # '[' -n 1691465 ']' 00:34:40.712 21:35:33 -- nvmf/common.sh@479 -- # killprocess 1691465 00:34:40.712 21:35:33 -- common/autotest_common.sh@936 -- # '[' -z 1691465 ']' 00:34:40.712 21:35:33 -- common/autotest_common.sh@940 -- # kill -0 1691465 00:34:40.712 21:35:33 -- common/autotest_common.sh@941 -- # uname 00:34:40.712 21:35:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:34:40.712 21:35:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1691465 00:34:40.712 21:35:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:34:40.712 21:35:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:34:40.712 21:35:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1691465' 00:34:40.712 killing process with pid 1691465 00:34:40.712 21:35:33 -- common/autotest_common.sh@955 -- # kill 1691465 00:34:40.712 21:35:33 -- common/autotest_common.sh@960 -- # wait 1691465 00:34:40.712 21:35:34 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:34:40.712 21:35:34 -- nvmf/common.sh@482 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:34:42.610 Waiting for block devices as requested 00:34:42.610 0000:c9:00.0 (144d a80a): vfio-pci -> nvme 00:34:42.610 0000:74:02.0 (8086 0cfe): vfio-pci -> idxd 00:34:42.868 0000:f1:02.0 (8086 0cfe): vfio-pci -> idxd 00:34:42.868 0000:79:02.0 (8086 0cfe): vfio-pci -> idxd 00:34:42.868 0000:6f:01.0 (8086 0b25): vfio-pci -> idxd 00:34:42.868 0000:6f:02.0 (8086 0cfe): vfio-pci -> idxd 00:34:43.125 0000:f6:01.0 (8086 0b25): vfio-pci -> idxd 00:34:43.125 0000:f6:02.0 (8086 0cfe): vfio-pci -> idxd 00:34:43.125 0000:74:01.0 (8086 0b25): vfio-pci -> idxd 00:34:43.125 0000:6a:02.0 (8086 0cfe): vfio-pci -> idxd 00:34:43.125 0000:79:01.0 (8086 0b25): vfio-pci -> idxd 00:34:43.383 0000:ec:01.0 (8086 0b25): 
vfio-pci -> idxd 00:34:43.383 0000:6a:01.0 (8086 0b25): vfio-pci -> idxd 00:34:43.383 0000:ec:02.0 (8086 0cfe): vfio-pci -> idxd 00:34:43.383 0000:e7:01.0 (8086 0b25): vfio-pci -> idxd 00:34:43.642 0000:e7:02.0 (8086 0cfe): vfio-pci -> idxd 00:34:43.642 0000:f1:01.0 (8086 0b25): vfio-pci -> idxd 00:34:43.642 0000:03:00.0 (1344 51c3): vfio-pci -> nvme 00:34:43.902 21:35:38 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:34:43.902 21:35:38 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:34:43.902 21:35:38 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:43.902 21:35:38 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:43.902 21:35:38 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:43.902 21:35:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:43.902 21:35:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:45.837 21:35:40 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:46.097 00:34:46.097 real 1m17.663s 00:34:46.097 user 8m14.277s 00:34:46.097 sys 0m18.891s 00:34:46.097 21:35:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:34:46.097 21:35:40 -- common/autotest_common.sh@10 -- # set +x 00:34:46.097 ************************************ 00:34:46.097 END TEST nvmf_dif 00:34:46.097 ************************************ 00:34:46.097 21:35:40 -- spdk/autotest.sh@291 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:46.097 21:35:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:34:46.097 21:35:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:34:46.097 21:35:40 -- common/autotest_common.sh@10 -- # set +x 00:34:46.097 ************************************ 00:34:46.097 START TEST nvmf_abort_qd_sizes 00:34:46.097 ************************************ 00:34:46.097 21:35:40 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:46.097 * Looking for test storage... 
00:34:46.097 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:34:46.097 21:35:40 -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:34:46.097 21:35:40 -- nvmf/common.sh@7 -- # uname -s 00:34:46.097 21:35:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:46.097 21:35:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:46.097 21:35:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:46.097 21:35:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:46.097 21:35:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:46.097 21:35:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:46.097 21:35:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:46.097 21:35:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:46.097 21:35:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:46.097 21:35:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:46.097 21:35:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:34:46.097 21:35:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:34:46.097 21:35:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:46.097 21:35:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:46.097 21:35:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:34:46.097 21:35:40 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:46.097 21:35:40 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:34:46.097 21:35:40 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:46.097 21:35:40 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:46.097 21:35:40 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:46.097 21:35:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:46.097 21:35:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:46.097 21:35:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:46.097 21:35:40 -- paths/export.sh@5 -- # export PATH 00:34:46.097 21:35:40 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:46.097 21:35:40 -- nvmf/common.sh@47 -- # : 0 00:34:46.097 21:35:40 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:46.097 21:35:40 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:46.097 21:35:40 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:46.097 21:35:40 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:46.097 21:35:40 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:46.097 21:35:40 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:46.097 21:35:40 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:46.097 21:35:40 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:46.097 21:35:40 -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:34:46.097 21:35:40 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:34:46.098 21:35:40 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:46.098 21:35:40 -- nvmf/common.sh@437 -- # prepare_net_devs 00:34:46.098 21:35:40 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:34:46.098 21:35:40 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:34:46.098 21:35:40 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:46.098 21:35:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:46.098 21:35:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:46.098 21:35:40 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:34:46.098 21:35:40 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:34:46.098 21:35:40 -- nvmf/common.sh@285 -- # xtrace_disable 00:34:46.098 21:35:40 -- common/autotest_common.sh@10 -- # set +x 00:34:52.663 21:35:45 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:34:52.663 21:35:45 -- nvmf/common.sh@291 -- # pci_devs=() 00:34:52.663 21:35:45 -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:52.663 21:35:45 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:52.663 21:35:45 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:52.663 21:35:45 -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:52.663 21:35:45 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:52.663 21:35:45 -- nvmf/common.sh@295 -- # net_devs=() 00:34:52.663 21:35:45 -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:52.663 21:35:45 -- nvmf/common.sh@296 -- # e810=() 00:34:52.663 21:35:45 -- nvmf/common.sh@296 -- # local -ga e810 00:34:52.663 21:35:45 -- nvmf/common.sh@297 -- # x722=() 00:34:52.663 21:35:45 -- nvmf/common.sh@297 -- # local -ga x722 00:34:52.663 21:35:45 -- nvmf/common.sh@298 -- # mlx=() 00:34:52.663 21:35:45 -- nvmf/common.sh@298 -- # local -ga mlx 00:34:52.663 21:35:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:52.663 21:35:45 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:52.663 21:35:45 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:52.663 21:35:45 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:52.663 21:35:45 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:52.663 21:35:45 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:52.663 21:35:45 -- 
nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:52.663 21:35:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:52.663 21:35:45 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:52.663 21:35:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:52.663 21:35:45 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:52.663 21:35:45 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:52.663 21:35:45 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:52.663 21:35:45 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:34:52.663 21:35:45 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:34:52.663 21:35:45 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:34:52.663 21:35:45 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:52.663 21:35:45 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:52.663 21:35:45 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:34:52.663 Found 0000:27:00.0 (0x8086 - 0x159b) 00:34:52.663 21:35:45 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:52.663 21:35:45 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:52.663 21:35:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:52.663 21:35:45 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:52.663 21:35:45 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:52.663 21:35:45 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:52.663 21:35:45 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:34:52.663 Found 0000:27:00.1 (0x8086 - 0x159b) 00:34:52.663 21:35:45 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:52.663 21:35:45 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:52.663 21:35:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:52.663 21:35:45 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:52.663 21:35:45 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:52.663 21:35:45 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:52.663 21:35:45 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:34:52.663 21:35:45 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:52.663 21:35:45 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:52.663 21:35:45 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:34:52.663 21:35:45 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:52.663 21:35:45 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:34:52.663 Found net devices under 0000:27:00.0: cvl_0_0 00:34:52.663 21:35:45 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:34:52.663 21:35:45 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:52.663 21:35:45 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:52.663 21:35:45 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:34:52.663 21:35:45 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:52.663 21:35:45 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:34:52.663 Found net devices under 0000:27:00.1: cvl_0_1 00:34:52.663 21:35:45 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:34:52.663 21:35:45 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:34:52.663 21:35:45 -- nvmf/common.sh@403 -- # is_hw=yes 00:34:52.663 21:35:45 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:34:52.663 21:35:45 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:34:52.663 21:35:45 -- 
nvmf/common.sh@407 -- # nvmf_tcp_init 00:34:52.663 21:35:45 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:52.663 21:35:45 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:52.663 21:35:45 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:52.663 21:35:45 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:52.663 21:35:45 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:52.663 21:35:45 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:52.663 21:35:45 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:52.663 21:35:45 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:52.663 21:35:45 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:52.663 21:35:45 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:52.663 21:35:45 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:52.663 21:35:45 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:52.663 21:35:45 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:52.663 21:35:45 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:52.663 21:35:45 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:52.663 21:35:45 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:52.663 21:35:45 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:52.663 21:35:45 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:52.663 21:35:45 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:52.663 21:35:45 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:52.663 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:52.663 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms 00:34:52.663 00:34:52.663 --- 10.0.0.2 ping statistics --- 00:34:52.663 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:52.663 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:34:52.663 21:35:45 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:52.663 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:52.663 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:34:52.663 00:34:52.663 --- 10.0.0.1 ping statistics --- 00:34:52.663 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:52.663 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:34:52.663 21:35:46 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:52.663 21:35:46 -- nvmf/common.sh@411 -- # return 0 00:34:52.663 21:35:46 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:34:52.663 21:35:46 -- nvmf/common.sh@440 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:34:54.572 0000:74:02.0 (8086 0cfe): idxd -> vfio-pci 00:34:54.572 0000:f1:02.0 (8086 0cfe): idxd -> vfio-pci 00:34:54.572 0000:79:02.0 (8086 0cfe): idxd -> vfio-pci 00:34:54.572 0000:6f:01.0 (8086 0b25): idxd -> vfio-pci 00:34:54.572 0000:6f:02.0 (8086 0cfe): idxd -> vfio-pci 00:34:54.572 0000:f6:01.0 (8086 0b25): idxd -> vfio-pci 00:34:54.572 0000:f6:02.0 (8086 0cfe): idxd -> vfio-pci 00:34:54.572 0000:74:01.0 (8086 0b25): idxd -> vfio-pci 00:34:54.572 0000:6a:02.0 (8086 0cfe): idxd -> vfio-pci 00:34:54.572 0000:79:01.0 (8086 0b25): idxd -> vfio-pci 00:34:54.572 0000:ec:01.0 (8086 0b25): idxd -> vfio-pci 00:34:54.572 0000:6a:01.0 (8086 0b25): idxd -> vfio-pci 00:34:54.830 0000:ec:02.0 (8086 0cfe): idxd -> vfio-pci 00:34:54.830 0000:e7:01.0 (8086 0b25): idxd -> vfio-pci 00:34:54.830 0000:e7:02.0 (8086 0cfe): idxd -> vfio-pci 00:34:54.830 0000:f1:01.0 (8086 0b25): idxd -> vfio-pci 00:34:55.401 0000:c9:00.0 (144d a80a): nvme -> vfio-pci 00:34:55.661 0000:03:00.0 (1344 51c3): nvme -> vfio-pci 00:34:55.661 21:35:49 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:55.661 21:35:49 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:34:55.661 21:35:49 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:34:55.661 21:35:49 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:55.662 21:35:49 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:34:55.662 21:35:49 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:34:55.662 21:35:49 -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:34:55.662 21:35:49 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:34:55.662 21:35:49 -- common/autotest_common.sh@710 -- # xtrace_disable 00:34:55.662 21:35:49 -- common/autotest_common.sh@10 -- # set +x 00:34:55.662 21:35:49 -- nvmf/common.sh@470 -- # nvmfpid=1718297 00:34:55.662 21:35:49 -- nvmf/common.sh@471 -- # waitforlisten 1718297 00:34:55.662 21:35:49 -- common/autotest_common.sh@817 -- # '[' -z 1718297 ']' 00:34:55.662 21:35:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:55.662 21:35:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:34:55.662 21:35:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:55.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:55.662 21:35:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:34:55.662 21:35:49 -- common/autotest_common.sh@10 -- # set +x 00:34:55.662 21:35:49 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:34:55.920 [2024-04-23 21:35:50.003963] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 
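
The ping checks just above close out nvmf_tcp_init, which turned the two cvl ports of one NIC pair into a point-to-point rig: cvl_0_0 is moved into a private network namespace as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), and every target process later in this log is started inside that namespace, as the nvmfappstart trace above shows. Condensed from the trace, with the interface names exactly as this machine enumerated them:

# Point-to-point TCP rig, as nvmf_tcp_init set it up above.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk      # target-side port into the netns
ip addr add 10.0.0.1/24 dev cvl_0_1            # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                             # the two checks shown above
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# The target app then runs inside the namespace:
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf
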
00:34:55.920 [2024-04-23 21:35:50.004069] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:55.920 EAL: No free 2048 kB hugepages reported on node 1 00:34:55.920 [2024-04-23 21:35:50.136235] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:56.179 [2024-04-23 21:35:50.234962] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:56.179 [2024-04-23 21:35:50.235008] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:56.179 [2024-04-23 21:35:50.235022] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:56.179 [2024-04-23 21:35:50.235032] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:56.179 [2024-04-23 21:35:50.235040] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:56.179 [2024-04-23 21:35:50.235117] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:56.179 [2024-04-23 21:35:50.235145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:34:56.179 [2024-04-23 21:35:50.235179] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:34:56.179 [2024-04-23 21:35:50.235165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:56.745 21:35:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:34:56.745 21:35:50 -- common/autotest_common.sh@850 -- # return 0 00:34:56.745 21:35:50 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:34:56.745 21:35:50 -- common/autotest_common.sh@716 -- # xtrace_disable 00:34:56.745 21:35:50 -- common/autotest_common.sh@10 -- # set +x 00:34:56.745 21:35:50 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:56.745 21:35:50 -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:34:56.745 21:35:50 -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:34:56.745 21:35:50 -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:34:56.745 21:35:50 -- scripts/common.sh@309 -- # local bdf bdfs 00:34:56.746 21:35:50 -- scripts/common.sh@310 -- # local nvmes 00:34:56.746 21:35:50 -- scripts/common.sh@312 -- # [[ -n 0000:03:00.0 0000:c9:00.0 ]] 00:34:56.746 21:35:50 -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:34:56.746 21:35:50 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:34:56.746 21:35:50 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:03:00.0 ]] 00:34:56.746 21:35:50 -- scripts/common.sh@320 -- # uname -s 00:34:56.746 21:35:50 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:34:56.746 21:35:50 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:34:56.746 21:35:50 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:34:56.746 21:35:50 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:c9:00.0 ]] 00:34:56.746 21:35:50 -- scripts/common.sh@320 -- # uname -s 00:34:56.746 21:35:50 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:34:56.746 21:35:50 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:34:56.746 21:35:50 -- scripts/common.sh@325 -- # (( 2 )) 00:34:56.746 21:35:50 -- scripts/common.sh@326 -- # printf '%s\n' 0000:03:00.0 0000:c9:00.0 00:34:56.746 
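
nvme_in_userspace, traced above, decides which controllers the abort tests may claim: it takes the class-0x010802 (NVMe) BDFs from the script's PCI bus cache and keeps only those still bound to the kernel nvme driver (the uname check is a FreeBSD branch that does not apply here). On this machine that yields the two controllers printed, and abort_qd_sizes.sh then takes the first, 0000:03:00.0, as the SPDK target's controller. A simplified equivalent of the filter, with the bus-cache lookup replaced by the two BDFs from the trace:

# Simplified form of the nvme_in_userspace helper from scripts/common.sh.
bdfs=()
for bdf in 0000:03:00.0 0000:c9:00.0; do
    # Keep only controllers still owned by the kernel nvme driver.
    [[ -e /sys/bus/pci/drivers/nvme/$bdf ]] || continue
    bdfs+=("$bdf")
done
printf '%s\n' "${bdfs[@]}"
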
21:35:50 -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 00:34:56.746 21:35:50 -- target/abort_qd_sizes.sh@78 -- # nvme=0000:03:00.0 00:34:56.746 21:35:50 -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:34:56.746 21:35:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:34:56.746 21:35:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:34:56.746 21:35:50 -- common/autotest_common.sh@10 -- # set +x 00:34:56.746 ************************************ 00:34:56.746 START TEST spdk_target_abort 00:34:56.746 ************************************ 00:34:56.746 21:35:50 -- common/autotest_common.sh@1111 -- # spdk_target 00:34:56.746 21:35:50 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:34:56.746 21:35:50 -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:03:00.0 -b spdk_target 00:34:56.746 21:35:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:56.746 21:35:50 -- common/autotest_common.sh@10 -- # set +x 00:34:57.004 spdk_targetn1 00:34:57.004 21:35:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:57.004 21:35:51 -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:57.004 21:35:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:57.004 21:35:51 -- common/autotest_common.sh@10 -- # set +x 00:34:57.004 [2024-04-23 21:35:51.232075] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:57.004 21:35:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:57.004 21:35:51 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:34:57.004 21:35:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:57.004 21:35:51 -- common/autotest_common.sh@10 -- # set +x 00:34:57.004 21:35:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:57.004 21:35:51 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:34:57.004 21:35:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:57.004 21:35:51 -- common/autotest_common.sh@10 -- # set +x 00:34:57.004 21:35:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:57.004 21:35:51 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:34:57.004 21:35:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:57.004 21:35:51 -- common/autotest_common.sh@10 -- # set +x 00:34:57.004 [2024-04-23 21:35:51.260269] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:57.004 21:35:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:57.004 21:35:51 -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:34:57.004 21:35:51 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:34:57.004 21:35:51 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:34:57.004 21:35:51 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:34:57.004 21:35:51 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:34:57.004 21:35:51 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:34:57.004 21:35:51 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:34:57.004 21:35:51 -- target/abort_qd_sizes.sh@24 -- # local target r 00:34:57.004 21:35:51 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:34:57.004 21:35:51 -- target/abort_qd_sizes.sh@28 -- # for r in 
trtype adrfam traddr trsvcid subnqn 00:34:57.004 21:35:51 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:34:57.004 21:35:51 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:57.004 21:35:51 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:34:57.004 21:35:51 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:57.004 21:35:51 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:34:57.004 21:35:51 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:57.004 21:35:51 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:57.004 21:35:51 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:57.004 21:35:51 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:57.004 21:35:51 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:57.004 21:35:51 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:57.263 EAL: No free 2048 kB hugepages reported on node 1 00:35:00.556 Initializing NVMe Controllers 00:35:00.556 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:00.556 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:00.556 Initialization complete. Launching workers. 00:35:00.556 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12808, failed: 0 00:35:00.556 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1406, failed to submit 11402 00:35:00.556 success 872, unsuccess 534, failed 0 00:35:00.556 21:35:54 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:00.556 21:35:54 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:00.556 EAL: No free 2048 kB hugepages reported on node 1 00:35:03.841 Initializing NVMe Controllers 00:35:03.841 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:03.841 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:03.841 Initialization complete. Launching workers. 
00:35:03.841 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8945, failed: 0 00:35:03.841 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1273, failed to submit 7672 00:35:03.841 success 291, unsuccess 982, failed 0 00:35:03.841 21:35:57 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:03.841 21:35:57 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:03.841 EAL: No free 2048 kB hugepages reported on node 1 00:35:07.130 Initializing NVMe Controllers 00:35:07.130 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:07.130 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:07.130 Initialization complete. Launching workers. 00:35:07.130 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38565, failed: 0 00:35:07.130 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2610, failed to submit 35955 00:35:07.130 success 578, unsuccess 2032, failed 0 00:35:07.130 21:36:01 -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:35:07.130 21:36:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:07.130 21:36:01 -- common/autotest_common.sh@10 -- # set +x 00:35:07.130 21:36:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:07.130 21:36:01 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:35:07.130 21:36:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:07.130 21:36:01 -- common/autotest_common.sh@10 -- # set +x 00:35:07.703 21:36:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:07.703 21:36:01 -- target/abort_qd_sizes.sh@61 -- # killprocess 1718297 00:35:07.703 21:36:01 -- common/autotest_common.sh@936 -- # '[' -z 1718297 ']' 00:35:07.703 21:36:01 -- common/autotest_common.sh@940 -- # kill -0 1718297 00:35:07.703 21:36:01 -- common/autotest_common.sh@941 -- # uname 00:35:07.703 21:36:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:35:07.703 21:36:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1718297 00:35:07.961 21:36:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:35:07.961 21:36:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:35:07.961 21:36:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1718297' 00:35:07.961 killing process with pid 1718297 00:35:07.961 21:36:01 -- common/autotest_common.sh@955 -- # kill 1718297 00:35:07.961 21:36:01 -- common/autotest_common.sh@960 -- # wait 1718297 00:35:08.220 00:35:08.220 real 0m11.522s 00:35:08.220 user 0m45.943s 00:35:08.220 sys 0m2.127s 00:35:08.220 21:36:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:35:08.220 21:36:02 -- common/autotest_common.sh@10 -- # set +x 00:35:08.220 ************************************ 00:35:08.220 END TEST spdk_target_abort 00:35:08.220 ************************************ 00:35:08.220 21:36:02 -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:35:08.220 21:36:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:35:08.220 21:36:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:35:08.220 21:36:02 -- common/autotest_common.sh@10 -- # set +x 00:35:08.220 
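
rabort, whose xtrace fills the run above, assembles the transport ID field by field and then sweeps the abort example across three queue depths. In the example's summary lines, "unsuccess" counts abort commands that completed without catching their I/O (the I/O had already finished), which is expected under load; only the "failed" column signals real errors, and it stayed at 0 for every depth here. The sweep itself reduces to:

# Queue-depth sweep from target/abort_qd_sizes.sh; the -r string is verbatim
# from the trace (the kernel-target pass below uses traddr:10.0.0.1 instead).
TID='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
for qd in 4 24 64; do
    ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$TID"
done
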
************************************ 00:35:08.220 START TEST kernel_target_abort 00:35:08.220 ************************************ 00:35:08.220 21:36:02 -- common/autotest_common.sh@1111 -- # kernel_target 00:35:08.220 21:36:02 -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:35:08.220 21:36:02 -- nvmf/common.sh@717 -- # local ip 00:35:08.220 21:36:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:35:08.220 21:36:02 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:35:08.220 21:36:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:08.220 21:36:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:08.220 21:36:02 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:35:08.220 21:36:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:08.220 21:36:02 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:35:08.220 21:36:02 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:35:08.220 21:36:02 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:35:08.220 21:36:02 -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:35:08.220 21:36:02 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:35:08.220 21:36:02 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:35:08.220 21:36:02 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:08.220 21:36:02 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:08.220 21:36:02 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:08.220 21:36:02 -- nvmf/common.sh@628 -- # local block nvme 00:35:08.220 21:36:02 -- nvmf/common.sh@630 -- # [[ ! 
-e /sys/module/nvmet ]] 00:35:08.220 21:36:02 -- nvmf/common.sh@631 -- # modprobe nvmet 00:35:08.479 21:36:02 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:08.479 21:36:02 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:35:11.018 Waiting for block devices as requested 00:35:11.018 0000:c9:00.0 (144d a80a): vfio-pci -> nvme 00:35:11.018 0000:74:02.0 (8086 0cfe): vfio-pci -> idxd 00:35:11.018 0000:f1:02.0 (8086 0cfe): vfio-pci -> idxd 00:35:11.276 0000:79:02.0 (8086 0cfe): vfio-pci -> idxd 00:35:11.276 0000:6f:01.0 (8086 0b25): vfio-pci -> idxd 00:35:11.276 0000:6f:02.0 (8086 0cfe): vfio-pci -> idxd 00:35:11.276 0000:f6:01.0 (8086 0b25): vfio-pci -> idxd 00:35:11.276 0000:f6:02.0 (8086 0cfe): vfio-pci -> idxd 00:35:11.535 0000:74:01.0 (8086 0b25): vfio-pci -> idxd 00:35:11.535 0000:6a:02.0 (8086 0cfe): vfio-pci -> idxd 00:35:11.535 0000:79:01.0 (8086 0b25): vfio-pci -> idxd 00:35:11.535 0000:ec:01.0 (8086 0b25): vfio-pci -> idxd 00:35:11.794 0000:6a:01.0 (8086 0b25): vfio-pci -> idxd 00:35:11.794 0000:ec:02.0 (8086 0cfe): vfio-pci -> idxd 00:35:11.794 0000:e7:01.0 (8086 0b25): vfio-pci -> idxd 00:35:11.794 0000:e7:02.0 (8086 0cfe): vfio-pci -> idxd 00:35:12.083 0000:f1:01.0 (8086 0b25): vfio-pci -> idxd 00:35:12.083 0000:03:00.0 (1344 51c3): vfio-pci -> nvme 00:35:12.653 21:36:06 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:35:12.653 21:36:06 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:12.653 21:36:06 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:35:12.653 21:36:06 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:35:12.653 21:36:06 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:35:12.653 21:36:06 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:35:12.653 21:36:06 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:35:12.653 21:36:06 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:35:12.653 21:36:06 -- scripts/common.sh@387 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:12.913 No valid GPT data, bailing 00:35:12.913 21:36:06 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:12.913 21:36:06 -- scripts/common.sh@391 -- # pt= 00:35:12.913 21:36:06 -- scripts/common.sh@392 -- # return 1 00:35:12.913 21:36:06 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:35:12.913 21:36:06 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:35:12.913 21:36:06 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme1n1 ]] 00:35:12.913 21:36:06 -- nvmf/common.sh@641 -- # is_block_zoned nvme1n1 00:35:12.913 21:36:06 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:35:12.913 21:36:06 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:35:12.913 21:36:06 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:35:12.913 21:36:06 -- nvmf/common.sh@642 -- # block_in_use nvme1n1 00:35:12.913 21:36:06 -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:35:12.913 21:36:06 -- scripts/common.sh@387 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py nvme1n1 00:35:12.913 No valid GPT data, bailing 00:35:12.913 21:36:06 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:35:12.913 21:36:06 -- scripts/common.sh@391 -- # pt= 00:35:12.913 21:36:06 -- scripts/common.sh@392 -- # return 1 00:35:12.913 21:36:06 -- nvmf/common.sh@642 -- # nvme=/dev/nvme1n1 00:35:12.913 21:36:06 -- nvmf/common.sh@645 -- # [[ -b 
/dev/nvme1n1 ]] 00:35:12.913 21:36:06 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:12.913 21:36:06 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:12.913 21:36:06 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:12.913 21:36:06 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:35:12.913 21:36:06 -- nvmf/common.sh@656 -- # echo 1 00:35:12.913 21:36:06 -- nvmf/common.sh@657 -- # echo /dev/nvme1n1 00:35:12.913 21:36:06 -- nvmf/common.sh@658 -- # echo 1 00:35:12.913 21:36:06 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:35:12.913 21:36:06 -- nvmf/common.sh@661 -- # echo tcp 00:35:12.913 21:36:06 -- nvmf/common.sh@662 -- # echo 4420 00:35:12.913 21:36:06 -- nvmf/common.sh@663 -- # echo ipv4 00:35:12.913 21:36:07 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:12.913 21:36:07 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 --hostid=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 -a 10.0.0.1 -t tcp -s 4420 00:35:12.913 00:35:12.913 Discovery Log Number of Records 2, Generation counter 2 00:35:12.913 =====Discovery Log Entry 0====== 00:35:12.913 trtype: tcp 00:35:12.913 adrfam: ipv4 00:35:12.913 subtype: current discovery subsystem 00:35:12.913 treq: not specified, sq flow control disable supported 00:35:12.913 portid: 1 00:35:12.913 trsvcid: 4420 00:35:12.913 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:12.913 traddr: 10.0.0.1 00:35:12.913 eflags: none 00:35:12.913 sectype: none 00:35:12.913 =====Discovery Log Entry 1====== 00:35:12.913 trtype: tcp 00:35:12.913 adrfam: ipv4 00:35:12.913 subtype: nvme subsystem 00:35:12.913 treq: not specified, sq flow control disable supported 00:35:12.913 portid: 1 00:35:12.913 trsvcid: 4420 00:35:12.913 subnqn: nqn.2016-06.io.spdk:testnqn 00:35:12.913 traddr: 10.0.0.1 00:35:12.913 eflags: none 00:35:12.913 sectype: none 00:35:12.913 21:36:07 -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:35:12.913 21:36:07 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:12.913 21:36:07 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:12.913 21:36:07 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:35:12.913 21:36:07 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:12.913 21:36:07 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:35:12.913 21:36:07 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:12.913 21:36:07 -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:12.913 21:36:07 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:12.913 21:36:07 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:12.913 21:36:07 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:12.913 21:36:07 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:12.913 21:36:07 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:12.913 21:36:07 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:12.913 21:36:07 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:35:12.913 21:36:07 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:12.913 
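
configure_kernel_target, traced just above, builds this target by hand in configfs rather than through SPDK: /dev/nvme1n1, the first block device to pass the GPT and zoned checks, is exported as namespace 1 of nqn.2016-06.io.spdk:testnqn on a TCP port at 10.0.0.1:4420, and the nvme discover output confirms that both the discovery subsystem and testnqn are reachable. The xtrace does not show redirection targets, so the attribute paths below are filled in from the standard nvmet configfs layout and should be read as inferred; the commands and values themselves are verbatim:

# Kernel NVMe-oF target setup, condensed from configure_kernel_target.
nvmet=/sys/kernel/config/nvmet
sub=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
modprobe nvmet
mkdir "$sub"
mkdir "$sub/namespaces/1"
mkdir "$nvmet/ports/1"
echo "SPDK-nqn.2016-06.io.spdk:testnqn" > "$sub/attr_serial"
echo 1 > "$sub/attr_allow_any_host"
echo /dev/nvme1n1 > "$sub/namespaces/1/device_path"
echo 1 > "$sub/namespaces/1/enable"
echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
echo tcp > "$nvmet/ports/1/addr_trtype"
echo 4420 > "$nvmet/ports/1/addr_trsvcid"
echo ipv4 > "$nvmet/ports/1/addr_adrfam"
ln -s "$sub" "$nvmet/ports/1/subsystems/"
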
21:36:07 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:35:12.913 21:36:07 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:12.913 21:36:07 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:12.913 21:36:07 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:12.913 21:36:07 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:12.913 EAL: No free 2048 kB hugepages reported on node 1 00:35:16.204 Initializing NVMe Controllers 00:35:16.204 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:16.204 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:16.204 Initialization complete. Launching workers. 00:35:16.204 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 46532, failed: 0 00:35:16.204 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 46532, failed to submit 0 00:35:16.204 success 0, unsuccess 46532, failed 0 00:35:16.204 21:36:10 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:16.204 21:36:10 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:16.204 EAL: No free 2048 kB hugepages reported on node 1 00:35:19.490 Initializing NVMe Controllers 00:35:19.490 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:19.490 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:19.490 Initialization complete. Launching workers. 00:35:19.490 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 90914, failed: 0 00:35:19.490 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 23014, failed to submit 67900 00:35:19.490 success 0, unsuccess 23014, failed 0 00:35:19.490 21:36:13 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:19.490 21:36:13 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:19.490 EAL: No free 2048 kB hugepages reported on node 1 00:35:22.134 Initializing NVMe Controllers 00:35:22.134 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:22.134 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:22.134 Initialization complete. Launching workers. 
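The three abort passes running through this stretch of the log come from rabort()'s loop over the queue depths qds=(4 24 64), fired at the target string it assembled field by field above. Condensed from the trace, with paths taken from this job's workspace:

```bash
rootdir=/var/jenkins/workspace/dsa-phy-autotest/spdk
target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'

for qd in 4 24 64; do
    # -q: queue depth, -w rw -M 50: 50/50 read/write mix, -o 4096: 4 KiB I/Os
    "$rootdir/build/examples/abort" -q "$qd" -w rw -M 50 -o 4096 -r "$target"
done
```

Reading the counters in these runs: "abort submitted" aborts were queued, "failed to submit" could not be queued (at the deeper depths the abort queue is already full), and "success 0, unsuccess N" most plausibly means every queued abort completed only after its target I/O had already finished, which is expected on a low-latency loopback TCP target. The assertion the test actually cares about is that all three runs end with failed 0.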
00:35:22.134 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 85114, failed: 0 00:35:22.134 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 21254, failed to submit 63860 00:35:22.134 success 0, unsuccess 21254, failed 0 00:35:22.134 21:36:16 -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:35:22.134 21:36:16 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:35:22.134 21:36:16 -- nvmf/common.sh@675 -- # echo 0 00:35:22.134 21:36:16 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:22.134 21:36:16 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:22.134 21:36:16 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:22.134 21:36:16 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:22.134 21:36:16 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:35:22.134 21:36:16 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:35:22.134 21:36:16 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:35:24.674 0000:74:02.0 (8086 0cfe): idxd -> vfio-pci 00:35:24.674 0000:f1:02.0 (8086 0cfe): idxd -> vfio-pci 00:35:24.674 0000:79:02.0 (8086 0cfe): idxd -> vfio-pci 00:35:24.933 0000:6f:01.0 (8086 0b25): idxd -> vfio-pci 00:35:24.933 0000:6f:02.0 (8086 0cfe): idxd -> vfio-pci 00:35:24.933 0000:f6:01.0 (8086 0b25): idxd -> vfio-pci 00:35:24.933 0000:f6:02.0 (8086 0cfe): idxd -> vfio-pci 00:35:24.933 0000:74:01.0 (8086 0b25): idxd -> vfio-pci 00:35:24.933 0000:6a:02.0 (8086 0cfe): idxd -> vfio-pci 00:35:24.933 0000:79:01.0 (8086 0b25): idxd -> vfio-pci 00:35:25.193 0000:ec:01.0 (8086 0b25): idxd -> vfio-pci 00:35:25.193 0000:6a:01.0 (8086 0b25): idxd -> vfio-pci 00:35:25.193 0000:ec:02.0 (8086 0cfe): idxd -> vfio-pci 00:35:25.193 0000:e7:01.0 (8086 0b25): idxd -> vfio-pci 00:35:25.193 0000:e7:02.0 (8086 0cfe): idxd -> vfio-pci 00:35:25.193 0000:f1:01.0 (8086 0b25): idxd -> vfio-pci 00:35:25.765 0000:c9:00.0 (144d a80a): nvme -> vfio-pci 00:35:26.024 0000:03:00.0 (1344 51c3): nvme -> vfio-pci 00:35:26.024 00:35:26.024 real 0m17.809s 00:35:26.024 user 0m5.888s 00:35:26.024 sys 0m5.450s 00:35:26.024 21:36:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:35:26.024 21:36:20 -- common/autotest_common.sh@10 -- # set +x 00:35:26.024 ************************************ 00:35:26.024 END TEST kernel_target_abort 00:35:26.024 ************************************ 00:35:26.282 21:36:20 -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:35:26.282 21:36:20 -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:35:26.282 21:36:20 -- nvmf/common.sh@477 -- # nvmfcleanup 00:35:26.282 21:36:20 -- nvmf/common.sh@117 -- # sync 00:35:26.282 21:36:20 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:26.282 21:36:20 -- nvmf/common.sh@120 -- # set +e 00:35:26.282 21:36:20 -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:26.282 21:36:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:26.282 rmmod nvme_tcp 00:35:26.282 rmmod nvme_fabrics 00:35:26.282 rmmod nvme_keyring 00:35:26.282 21:36:20 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:26.282 21:36:20 -- nvmf/common.sh@124 -- # set -e 00:35:26.282 21:36:20 -- nvmf/common.sh@125 -- # return 0 00:35:26.282 21:36:20 -- nvmf/common.sh@478 -- # '[' -n 1718297 ']' 
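The clean_kernel_target() trace above tears the configfs tree down in strict reverse order: the port-to-subsystem symlink has to go before any rmdir can succeed, and the module unload only works once configfs is empty. A sketch, with the redirection target of the traced 'echo 0' assumed to be the namespace enable flag:

```bash
subnqn=nqn.2016-06.io.spdk:testnqn
subsys=/sys/kernel/config/nvmet/subsystems/$subnqn
port=/sys/kernel/config/nvmet/ports/1

echo 0 > "$subsys/namespaces/1/enable"   # assumed redirection target
rm -f "$port/subsystems/$subnqn"         # unlink the subsystem from the port first
rmdir "$subsys/namespaces/1"
rmdir "$port"
rmdir "$subsys"
modprobe -r nvmet_tcp nvmet              # fails while any configfs reference remains
```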
00:35:26.282 21:36:20 -- nvmf/common.sh@479 -- # killprocess 1718297 00:35:26.282 21:36:20 -- common/autotest_common.sh@936 -- # '[' -z 1718297 ']' 00:35:26.282 21:36:20 -- common/autotest_common.sh@940 -- # kill -0 1718297 00:35:26.282 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (1718297) - No such process 00:35:26.282 21:36:20 -- common/autotest_common.sh@963 -- # echo 'Process with pid 1718297 is not found' 00:35:26.282 Process with pid 1718297 is not found 00:35:26.282 21:36:20 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:35:26.282 21:36:20 -- nvmf/common.sh@482 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:35:28.818 Waiting for block devices as requested 00:35:28.818 0000:c9:00.0 (144d a80a): vfio-pci -> nvme 00:35:28.818 0000:74:02.0 (8086 0cfe): vfio-pci -> idxd 00:35:28.818 0000:f1:02.0 (8086 0cfe): vfio-pci -> idxd 00:35:29.076 0000:79:02.0 (8086 0cfe): vfio-pci -> idxd 00:35:29.076 0000:6f:01.0 (8086 0b25): vfio-pci -> idxd 00:35:29.076 0000:6f:02.0 (8086 0cfe): vfio-pci -> idxd 00:35:29.076 0000:f6:01.0 (8086 0b25): vfio-pci -> idxd 00:35:29.335 0000:f6:02.0 (8086 0cfe): vfio-pci -> idxd 00:35:29.335 0000:74:01.0 (8086 0b25): vfio-pci -> idxd 00:35:29.335 0000:6a:02.0 (8086 0cfe): vfio-pci -> idxd 00:35:29.335 0000:79:01.0 (8086 0b25): vfio-pci -> idxd 00:35:29.335 0000:ec:01.0 (8086 0b25): vfio-pci -> idxd 00:35:29.594 0000:6a:01.0 (8086 0b25): vfio-pci -> idxd 00:35:29.594 0000:ec:02.0 (8086 0cfe): vfio-pci -> idxd 00:35:29.594 0000:e7:01.0 (8086 0b25): vfio-pci -> idxd 00:35:29.594 0000:e7:02.0 (8086 0cfe): vfio-pci -> idxd 00:35:29.852 0000:f1:01.0 (8086 0b25): vfio-pci -> idxd 00:35:29.852 0000:03:00.0 (1344 51c3): vfio-pci -> nvme 00:35:30.112 21:36:24 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:35:30.112 21:36:24 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:35:30.112 21:36:24 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:30.112 21:36:24 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:30.112 21:36:24 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:30.112 21:36:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:30.112 21:36:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:32.646 21:36:26 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:32.646 00:35:32.646 real 0m46.044s 00:35:32.646 user 0m55.487s 00:35:32.646 sys 0m15.609s 00:35:32.646 21:36:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:35:32.646 21:36:26 -- common/autotest_common.sh@10 -- # set +x 00:35:32.646 ************************************ 00:35:32.646 END TEST nvmf_abort_qd_sizes 00:35:32.646 ************************************ 00:35:32.646 21:36:26 -- spdk/autotest.sh@293 -- # run_test keyring_file /var/jenkins/workspace/dsa-phy-autotest/spdk/test/keyring/file.sh 00:35:32.646 21:36:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:35:32.646 21:36:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:35:32.646 21:36:26 -- common/autotest_common.sh@10 -- # set +x 00:35:32.646 ************************************ 00:35:32.646 START TEST keyring_file 00:35:32.646 ************************************ 00:35:32.646 21:36:26 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/keyring/file.sh 00:35:32.646 * Looking for test storage... 
00:35:32.646 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/keyring 00:35:32.646 21:36:26 -- keyring/file.sh@11 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/keyring/common.sh 00:35:32.646 21:36:26 -- keyring/common.sh@4 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:35:32.646 21:36:26 -- nvmf/common.sh@7 -- # uname -s 00:35:32.646 21:36:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:32.646 21:36:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:32.646 21:36:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:32.646 21:36:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:32.646 21:36:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:32.646 21:36:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:32.646 21:36:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:32.646 21:36:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:32.646 21:36:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:32.646 21:36:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:32.646 21:36:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:35:32.646 21:36:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=00f1cc9e-19ae-ed11-906e-a4bf01948ec3 00:35:32.646 21:36:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:32.646 21:36:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:32.646 21:36:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:35:32.646 21:36:26 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:32.646 21:36:26 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:35:32.646 21:36:26 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:32.646 21:36:26 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:32.646 21:36:26 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:32.646 21:36:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:32.646 21:36:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:32.646 21:36:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:32.646 21:36:26 -- paths/export.sh@5 -- # export PATH 00:35:32.646 21:36:26 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:32.646 21:36:26 -- nvmf/common.sh@47 -- # : 0 00:35:32.646 21:36:26 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:32.646 21:36:26 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:32.646 21:36:26 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:32.646 21:36:26 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:32.646 21:36:26 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:32.646 21:36:26 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:32.646 21:36:26 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:32.646 21:36:26 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:32.646 21:36:26 -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:35:32.646 21:36:26 -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:35:32.646 21:36:26 -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:35:32.646 21:36:26 -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:35:32.646 21:36:26 -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:35:32.646 21:36:26 -- keyring/file.sh@24 -- # trap cleanup EXIT 00:35:32.647 21:36:26 -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:35:32.647 21:36:26 -- keyring/common.sh@15 -- # local name key digest path 00:35:32.647 21:36:26 -- keyring/common.sh@17 -- # name=key0 00:35:32.647 21:36:26 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:32.647 21:36:26 -- keyring/common.sh@17 -- # digest=0 00:35:32.647 21:36:26 -- keyring/common.sh@18 -- # mktemp 00:35:32.647 21:36:26 -- keyring/common.sh@18 -- # path=/tmp/tmp.9ZopVCqTru 00:35:32.647 21:36:26 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:32.647 21:36:26 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:32.647 21:36:26 -- nvmf/common.sh@691 -- # local prefix key digest 00:35:32.647 21:36:26 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:35:32.647 21:36:26 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:35:32.647 21:36:26 -- nvmf/common.sh@693 -- # digest=0 00:35:32.647 21:36:26 -- nvmf/common.sh@694 -- # python - 00:35:32.647 21:36:26 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.9ZopVCqTru 00:35:32.647 21:36:26 -- keyring/common.sh@23 -- # echo /tmp/tmp.9ZopVCqTru 00:35:32.647 21:36:26 -- keyring/file.sh@26 -- # key0path=/tmp/tmp.9ZopVCqTru 00:35:32.647 21:36:26 -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:35:32.647 21:36:26 -- keyring/common.sh@15 -- # local name key digest path 00:35:32.647 21:36:26 -- keyring/common.sh@17 -- # name=key1 00:35:32.647 21:36:26 -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:35:32.647 21:36:26 -- keyring/common.sh@17 -- # digest=0 00:35:32.647 21:36:26 -- keyring/common.sh@18 -- # mktemp 00:35:32.647 21:36:26 -- keyring/common.sh@18 -- # path=/tmp/tmp.vVKsuNleJ9 00:35:32.647 21:36:26 -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:35:32.647 21:36:26 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 
112233445566778899aabbccddeeff00 0 00:35:32.647 21:36:26 -- nvmf/common.sh@691 -- # local prefix key digest 00:35:32.647 21:36:26 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:35:32.647 21:36:26 -- nvmf/common.sh@693 -- # key=112233445566778899aabbccddeeff00 00:35:32.647 21:36:26 -- nvmf/common.sh@693 -- # digest=0 00:35:32.647 21:36:26 -- nvmf/common.sh@694 -- # python - 00:35:32.647 21:36:26 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.vVKsuNleJ9 00:35:32.647 21:36:26 -- keyring/common.sh@23 -- # echo /tmp/tmp.vVKsuNleJ9 00:35:32.647 21:36:26 -- keyring/file.sh@27 -- # key1path=/tmp/tmp.vVKsuNleJ9 00:35:32.647 21:36:26 -- keyring/file.sh@30 -- # tgtpid=1735735 00:35:32.647 21:36:26 -- keyring/file.sh@32 -- # waitforlisten 1735735 00:35:32.647 21:36:26 -- common/autotest_common.sh@817 -- # '[' -z 1735735 ']' 00:35:32.647 21:36:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:32.647 21:36:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:35:32.647 21:36:26 -- keyring/file.sh@29 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt 00:35:32.647 21:36:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:32.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:32.647 21:36:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:35:32.647 21:36:26 -- common/autotest_common.sh@10 -- # set +x 00:35:32.647 [2024-04-23 21:36:26.743756] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 00:35:32.647 [2024-04-23 21:36:26.743894] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1735735 ] 00:35:32.647 EAL: No free 2048 kB hugepages reported on node 1 00:35:32.647 [2024-04-23 21:36:26.870602] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:32.908 [2024-04-23 21:36:26.971089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:33.169 21:36:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:35:33.169 21:36:27 -- common/autotest_common.sh@850 -- # return 0 00:35:33.169 21:36:27 -- keyring/file.sh@33 -- # rpc_cmd 00:35:33.169 21:36:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:33.169 21:36:27 -- common/autotest_common.sh@10 -- # set +x 00:35:33.169 [2024-04-23 21:36:27.433749] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:33.430 null0 00:35:33.430 [2024-04-23 21:36:27.465745] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:33.430 [2024-04-23 21:36:27.466056] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:33.430 [2024-04-23 21:36:27.473734] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:35:33.430 21:36:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:33.430 21:36:27 -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:33.430 21:36:27 -- common/autotest_common.sh@638 -- # local es=0 00:35:33.430 21:36:27 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:33.430 21:36:27 -- 
common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:35:33.430 21:36:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:35:33.430 21:36:27 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:35:33.430 21:36:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:35:33.430 21:36:27 -- common/autotest_common.sh@641 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:33.430 21:36:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:33.430 21:36:27 -- common/autotest_common.sh@10 -- # set +x 00:35:33.430 [2024-04-23 21:36:27.489737] nvmf_rpc.c: 769:nvmf_rpc_listen_paused: *ERROR*: A listener already exists with different secure channel option.request: 00:35:33.430 { 00:35:33.430 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:35:33.430 "secure_channel": false, 00:35:33.430 "listen_address": { 00:35:33.430 "trtype": "tcp", 00:35:33.430 "traddr": "127.0.0.1", 00:35:33.430 "trsvcid": "4420" 00:35:33.430 }, 00:35:33.430 "method": "nvmf_subsystem_add_listener", 00:35:33.430 "req_id": 1 00:35:33.430 } 00:35:33.430 Got JSON-RPC error response 00:35:33.430 response: 00:35:33.430 { 00:35:33.430 "code": -32602, 00:35:33.430 "message": "Invalid parameters" 00:35:33.430 } 00:35:33.430 21:36:27 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:35:33.430 21:36:27 -- common/autotest_common.sh@641 -- # es=1 00:35:33.430 21:36:27 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:35:33.430 21:36:27 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:35:33.430 21:36:27 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:35:33.430 21:36:27 -- keyring/file.sh@46 -- # bperfpid=1735962 00:35:33.430 21:36:27 -- keyring/file.sh@48 -- # waitforlisten 1735962 /var/tmp/bperf.sock 00:35:33.430 21:36:27 -- common/autotest_common.sh@817 -- # '[' -z 1735962 ']' 00:35:33.430 21:36:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:33.430 21:36:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:35:33.430 21:36:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:33.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:33.430 21:36:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:35:33.430 21:36:27 -- common/autotest_common.sh@10 -- # set +x 00:35:33.430 21:36:27 -- keyring/file.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:35:33.430 [2024-04-23 21:36:27.583725] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 
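The long valid_exec_arg/es trace above is the NOT wrapper from autotest_common.sh doing its job: file.sh@43 expects nvmf_subsystem_add_listener to be rejected (a listener already exists with a different secure_channel setting), so the test passes only if the RPC fails. Roughly, and simplified from the real helper:

```bash
NOT() {
    local es=0
    "$@" || es=$?
    # Exit codes above 128 mean the command died on a signal; that is a real
    # failure, not the clean error this assertion is waiting for.
    (( es > 128 )) && return "$es"
    (( es != 0 ))   # succeed only when the wrapped command failed
}

# Usage mirroring the trace above: the -32602 JSON-RPC error makes NOT return 0.
NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 \
    nqn.2016-06.io.spdk:cnode0
```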
00:35:33.430 [2024-04-23 21:36:27.583867] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1735962 ] 00:35:33.430 EAL: No free 2048 kB hugepages reported on node 1 00:35:33.690 [2024-04-23 21:36:27.717358] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:33.690 [2024-04-23 21:36:27.807811] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:35:34.258 21:36:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:35:34.258 21:36:28 -- common/autotest_common.sh@850 -- # return 0 00:35:34.258 21:36:28 -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.9ZopVCqTru 00:35:34.258 21:36:28 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.9ZopVCqTru 00:35:34.258 21:36:28 -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.vVKsuNleJ9 00:35:34.258 21:36:28 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.vVKsuNleJ9 00:35:34.258 21:36:28 -- keyring/file.sh@51 -- # get_key key0 00:35:34.517 21:36:28 -- keyring/file.sh@51 -- # jq -r .path 00:35:34.517 21:36:28 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:34.517 21:36:28 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:34.517 21:36:28 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:34.517 21:36:28 -- keyring/file.sh@51 -- # [[ /tmp/tmp.9ZopVCqTru == \/\t\m\p\/\t\m\p\.\9\Z\o\p\V\C\q\T\r\u ]] 00:35:34.517 21:36:28 -- keyring/file.sh@52 -- # jq -r .path 00:35:34.517 21:36:28 -- keyring/file.sh@52 -- # get_key key1 00:35:34.517 21:36:28 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:34.517 21:36:28 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:34.517 21:36:28 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:34.778 21:36:28 -- keyring/file.sh@52 -- # [[ /tmp/tmp.vVKsuNleJ9 == \/\t\m\p\/\t\m\p\.\v\V\K\s\u\N\l\e\J\9 ]] 00:35:34.778 21:36:28 -- keyring/file.sh@53 -- # get_refcnt key0 00:35:34.778 21:36:28 -- keyring/common.sh@12 -- # get_key key0 00:35:34.778 21:36:28 -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:34.778 21:36:28 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:34.778 21:36:28 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:34.778 21:36:28 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:34.778 21:36:28 -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:35:34.778 21:36:28 -- keyring/file.sh@54 -- # get_refcnt key1 00:35:34.778 21:36:28 -- keyring/common.sh@12 -- # get_key key1 00:35:34.778 21:36:28 -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:34.778 21:36:28 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:34.778 21:36:28 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:34.778 21:36:28 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:35.038 21:36:29 -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:35:35.038 21:36:29 -- keyring/file.sh@57 -- # 
bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:35.038 21:36:29 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:35.038 [2024-04-23 21:36:29.216164] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:35.038 nvme0n1 00:35:35.038 21:36:29 -- keyring/file.sh@59 -- # get_refcnt key0 00:35:35.038 21:36:29 -- keyring/common.sh@12 -- # get_key key0 00:35:35.038 21:36:29 -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:35.038 21:36:29 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:35.039 21:36:29 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:35.039 21:36:29 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:35.299 21:36:29 -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:35:35.299 21:36:29 -- keyring/file.sh@60 -- # get_refcnt key1 00:35:35.299 21:36:29 -- keyring/common.sh@12 -- # get_key key1 00:35:35.299 21:36:29 -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:35.299 21:36:29 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:35.299 21:36:29 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:35.299 21:36:29 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:35.559 21:36:29 -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:35:35.559 21:36:29 -- keyring/file.sh@62 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:35.559 Running I/O for 1 seconds... 
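Before the one-second bdevperf run above prints its table below, it is worth condensing the keyring/common.sh helpers that the xtrace keeps expanding: bperf_cmd is just rpc.py aimed at the bdevperf socket, and the key inspectors are jq filters over keyring_get_keys. Condensed from the trace:

```bash
bperfsock=/var/tmp/bperf.sock
rootdir=/var/jenkins/workspace/dsa-phy-autotest/spdk

bperf_cmd()  { "$rootdir/scripts/rpc.py" -s "$bperfsock" "$@"; }
get_key()    { bperf_cmd keyring_get_keys | jq ".[] | select(.name == \"$1\")"; }
get_refcnt() { get_key "$1" | jq -r .refcnt; }

# file.sh@59 above: attaching a controller with --psk key0 takes a reference,
# so the key's refcount is asserted to have climbed from 1 to 2.
(( $(get_refcnt key0) == 2 ))
```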
00:35:36.496
00:35:36.496 Latency(us)
00:35:36.496 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:36.496 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096)
00:35:36.496 nvme0n1 : 1.02 6377.01 24.91 0.00 0.00 19893.97 8657.65 102098.19
00:35:36.496 ===================================================================================================================
00:35:36.496 Total : 6377.01 24.91 0.00 0.00 19893.97 8657.65 102098.19
00:35:36.496 0
00:35:36.496 21:36:30 -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0
00:35:36.496 21:36:30 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
00:35:36.755 21:36:30 -- keyring/file.sh@65 -- # get_refcnt key0
00:35:36.755 21:36:30 -- keyring/common.sh@12 -- # get_key key0
00:35:36.755 21:36:30 -- keyring/common.sh@12 -- # jq -r .refcnt
00:35:36.755 21:36:30 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:35:36.755 21:36:30 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:35:36.755 21:36:30 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:35:36.755 21:36:31 -- keyring/file.sh@65 -- # (( 1 == 1 ))
00:35:36.755 21:36:31 -- keyring/file.sh@66 -- # get_refcnt key1
00:35:36.755 21:36:31 -- keyring/common.sh@12 -- # get_key key1
00:35:36.755 21:36:31 -- keyring/common.sh@12 -- # jq -r .refcnt
00:35:36.755 21:36:31 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:35:36.755 21:36:31 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:35:36.755 21:36:31 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:35:37.015 21:36:31 -- keyring/file.sh@66 -- # (( 1 == 1 ))
00:35:37.015 21:36:31 -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:35:37.015 21:36:31 -- common/autotest_common.sh@638 -- # local es=0
00:35:37.015 21:36:31 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:35:37.015 21:36:31 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd
00:35:37.015 21:36:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:35:37.015 21:36:31 -- common/autotest_common.sh@630 -- # type -t bperf_cmd
00:35:37.015 21:36:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:35:37.015 21:36:31 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:35:37.015 21:36:31 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:35:37.015 [2024-04-23 21:36:31.282175] /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:35:37.015 [2024-04-23 21:36:31.282392]
nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000009240 (107): Transport endpoint is not connected 00:35:37.015 [2024-04-23 21:36:31.283370] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000009240 (9): Bad file descriptor 00:35:37.015 [2024-04-23 21:36:31.284367] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:35:37.015 [2024-04-23 21:36:31.284382] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:35:37.015 [2024-04-23 21:36:31.284392] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:35:37.015 request: 00:35:37.015 { 00:35:37.015 "name": "nvme0", 00:35:37.015 "trtype": "tcp", 00:35:37.015 "traddr": "127.0.0.1", 00:35:37.015 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:37.015 "adrfam": "ipv4", 00:35:37.015 "trsvcid": "4420", 00:35:37.015 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:37.015 "psk": "key1", 00:35:37.015 "method": "bdev_nvme_attach_controller", 00:35:37.015 "req_id": 1 00:35:37.015 } 00:35:37.015 Got JSON-RPC error response 00:35:37.015 response: 00:35:37.015 { 00:35:37.015 "code": -32602, 00:35:37.015 "message": "Invalid parameters" 00:35:37.015 } 00:35:37.276 21:36:31 -- common/autotest_common.sh@641 -- # es=1 00:35:37.276 21:36:31 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:35:37.276 21:36:31 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:35:37.276 21:36:31 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:35:37.276 21:36:31 -- keyring/file.sh@71 -- # get_refcnt key0 00:35:37.276 21:36:31 -- keyring/common.sh@12 -- # get_key key0 00:35:37.276 21:36:31 -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:37.276 21:36:31 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:37.276 21:36:31 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:37.276 21:36:31 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:37.276 21:36:31 -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:35:37.276 21:36:31 -- keyring/file.sh@72 -- # get_refcnt key1 00:35:37.276 21:36:31 -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:37.276 21:36:31 -- keyring/common.sh@12 -- # get_key key1 00:35:37.276 21:36:31 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:37.276 21:36:31 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:37.277 21:36:31 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:37.538 21:36:31 -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:35:37.538 21:36:31 -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:35:37.538 21:36:31 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:37.538 21:36:31 -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:35:37.538 21:36:31 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:35:37.799 21:36:31 -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:35:37.800 21:36:31 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:37.800 21:36:31 -- keyring/file.sh@77 -- # jq length 00:35:37.800 21:36:32 -- 
keyring/file.sh@77 -- # (( 0 == 0 )) 00:35:37.800 21:36:32 -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.9ZopVCqTru 00:35:37.800 21:36:32 -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.9ZopVCqTru 00:35:37.800 21:36:32 -- common/autotest_common.sh@638 -- # local es=0 00:35:37.800 21:36:32 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.9ZopVCqTru 00:35:37.800 21:36:32 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:35:37.800 21:36:32 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:35:37.800 21:36:32 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:35:37.800 21:36:32 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:35:37.800 21:36:32 -- common/autotest_common.sh@641 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.9ZopVCqTru 00:35:37.800 21:36:32 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.9ZopVCqTru 00:35:38.059 [2024-04-23 21:36:32.171329] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.9ZopVCqTru': 0100660 00:35:38.059 [2024-04-23 21:36:32.171369] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:35:38.059 request: 00:35:38.059 { 00:35:38.059 "name": "key0", 00:35:38.059 "path": "/tmp/tmp.9ZopVCqTru", 00:35:38.059 "method": "keyring_file_add_key", 00:35:38.059 "req_id": 1 00:35:38.059 } 00:35:38.059 Got JSON-RPC error response 00:35:38.059 response: 00:35:38.059 { 00:35:38.059 "code": -1, 00:35:38.059 "message": "Operation not permitted" 00:35:38.059 } 00:35:38.059 21:36:32 -- common/autotest_common.sh@641 -- # es=1 00:35:38.059 21:36:32 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:35:38.059 21:36:32 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:35:38.059 21:36:32 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:35:38.059 21:36:32 -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.9ZopVCqTru 00:35:38.059 21:36:32 -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.9ZopVCqTru 00:35:38.059 21:36:32 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.9ZopVCqTru 00:35:38.317 21:36:32 -- keyring/file.sh@86 -- # rm -f /tmp/tmp.9ZopVCqTru 00:35:38.317 21:36:32 -- keyring/file.sh@88 -- # get_refcnt key0 00:35:38.317 21:36:32 -- keyring/common.sh@12 -- # get_key key0 00:35:38.317 21:36:32 -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:38.317 21:36:32 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:38.317 21:36:32 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:38.317 21:36:32 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:38.317 21:36:32 -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:35:38.317 21:36:32 -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:38.317 21:36:32 -- common/autotest_common.sh@638 -- # local es=0 00:35:38.317 21:36:32 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:38.317 21:36:32 -- 
common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:35:38.317 21:36:32 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:35:38.317 21:36:32 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:35:38.317 21:36:32 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:35:38.317 21:36:32 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:38.317 21:36:32 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:38.576 [2024-04-23 21:36:32.615437] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.9ZopVCqTru': No such file or directory 00:35:38.576 [2024-04-23 21:36:32.615465] nvme_tcp.c:2570:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:35:38.576 [2024-04-23 21:36:32.615491] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:35:38.576 [2024-04-23 21:36:32.615502] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:35:38.576 [2024-04-23 21:36:32.615511] bdev_nvme.c:6204:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:35:38.576 request: 00:35:38.576 { 00:35:38.576 "name": "nvme0", 00:35:38.576 "trtype": "tcp", 00:35:38.576 "traddr": "127.0.0.1", 00:35:38.576 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:38.576 "adrfam": "ipv4", 00:35:38.576 "trsvcid": "4420", 00:35:38.576 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:38.576 "psk": "key0", 00:35:38.576 "method": "bdev_nvme_attach_controller", 00:35:38.576 "req_id": 1 00:35:38.576 } 00:35:38.576 Got JSON-RPC error response 00:35:38.576 response: 00:35:38.576 { 00:35:38.576 "code": -19, 00:35:38.576 "message": "No such device" 00:35:38.576 } 00:35:38.576 21:36:32 -- common/autotest_common.sh@641 -- # es=1 00:35:38.576 21:36:32 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:35:38.576 21:36:32 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:35:38.576 21:36:32 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:35:38.576 21:36:32 -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:35:38.576 21:36:32 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:38.576 21:36:32 -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:35:38.576 21:36:32 -- keyring/common.sh@15 -- # local name key digest path 00:35:38.576 21:36:32 -- keyring/common.sh@17 -- # name=key0 00:35:38.576 21:36:32 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:38.576 21:36:32 -- keyring/common.sh@17 -- # digest=0 00:35:38.576 21:36:32 -- keyring/common.sh@18 -- # mktemp 00:35:38.576 21:36:32 -- keyring/common.sh@18 -- # path=/tmp/tmp.lPx7kGrxrI 00:35:38.576 21:36:32 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:38.576 21:36:32 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:38.576 21:36:32 -- nvmf/common.sh@691 -- # local prefix key digest 00:35:38.576 21:36:32 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:35:38.576 21:36:32 -- nvmf/common.sh@693 
-- # key=00112233445566778899aabbccddeeff 00:35:38.576 21:36:32 -- nvmf/common.sh@693 -- # digest=0 00:35:38.576 21:36:32 -- nvmf/common.sh@694 -- # python - 00:35:38.576 21:36:32 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.lPx7kGrxrI 00:35:38.576 21:36:32 -- keyring/common.sh@23 -- # echo /tmp/tmp.lPx7kGrxrI 00:35:38.576 21:36:32 -- keyring/file.sh@95 -- # key0path=/tmp/tmp.lPx7kGrxrI 00:35:38.576 21:36:32 -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.lPx7kGrxrI 00:35:38.576 21:36:32 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.lPx7kGrxrI 00:35:38.833 21:36:32 -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:38.833 21:36:32 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:39.092 nvme0n1 00:35:39.092 21:36:33 -- keyring/file.sh@99 -- # get_refcnt key0 00:35:39.092 21:36:33 -- keyring/common.sh@12 -- # get_key key0 00:35:39.092 21:36:33 -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:39.092 21:36:33 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:39.092 21:36:33 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:39.092 21:36:33 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:39.092 21:36:33 -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:35:39.092 21:36:33 -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:35:39.092 21:36:33 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:39.350 21:36:33 -- keyring/file.sh@101 -- # get_key key0 00:35:39.350 21:36:33 -- keyring/file.sh@101 -- # jq -r .removed 00:35:39.351 21:36:33 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:39.351 21:36:33 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:39.351 21:36:33 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:39.351 21:36:33 -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:35:39.351 21:36:33 -- keyring/file.sh@102 -- # get_refcnt key0 00:35:39.351 21:36:33 -- keyring/common.sh@12 -- # get_key key0 00:35:39.351 21:36:33 -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:39.351 21:36:33 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:39.351 21:36:33 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:39.351 21:36:33 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:39.611 21:36:33 -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:35:39.611 21:36:33 -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:39.611 21:36:33 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:39.611 21:36:33 -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:35:39.611 21:36:33 -- keyring/file.sh@104 -- # jq length 00:35:39.611 21:36:33 -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:39.872 21:36:34 -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:35:39.872 21:36:34 -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.lPx7kGrxrI 00:35:39.872 21:36:34 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.lPx7kGrxrI 00:35:40.132 21:36:34 -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.vVKsuNleJ9 00:35:40.132 21:36:34 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.vVKsuNleJ9 00:35:40.132 21:36:34 -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:40.132 21:36:34 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:40.392 nvme0n1 00:35:40.392 21:36:34 -- keyring/file.sh@112 -- # bperf_cmd save_config 00:35:40.392 21:36:34 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:35:40.651 21:36:34 -- keyring/file.sh@112 -- # config='{ 00:35:40.651 "subsystems": [ 00:35:40.651 { 00:35:40.651 "subsystem": "keyring", 00:35:40.651 "config": [ 00:35:40.651 { 00:35:40.651 "method": "keyring_file_add_key", 00:35:40.651 "params": { 00:35:40.651 "name": "key0", 00:35:40.651 "path": "/tmp/tmp.lPx7kGrxrI" 00:35:40.651 } 00:35:40.651 }, 00:35:40.651 { 00:35:40.651 "method": "keyring_file_add_key", 00:35:40.651 "params": { 00:35:40.651 "name": "key1", 00:35:40.651 "path": "/tmp/tmp.vVKsuNleJ9" 00:35:40.651 } 00:35:40.651 } 00:35:40.651 ] 00:35:40.651 }, 00:35:40.651 { 00:35:40.651 "subsystem": "iobuf", 00:35:40.651 "config": [ 00:35:40.651 { 00:35:40.651 "method": "iobuf_set_options", 00:35:40.651 "params": { 00:35:40.651 "small_pool_count": 8192, 00:35:40.651 "large_pool_count": 1024, 00:35:40.651 "small_bufsize": 8192, 00:35:40.651 "large_bufsize": 135168 00:35:40.651 } 00:35:40.651 } 00:35:40.651 ] 00:35:40.651 }, 00:35:40.651 { 00:35:40.651 "subsystem": "sock", 00:35:40.651 "config": [ 00:35:40.651 { 00:35:40.651 "method": "sock_impl_set_options", 00:35:40.651 "params": { 00:35:40.651 "impl_name": "posix", 00:35:40.651 "recv_buf_size": 2097152, 00:35:40.651 "send_buf_size": 2097152, 00:35:40.651 "enable_recv_pipe": true, 00:35:40.651 "enable_quickack": false, 00:35:40.651 "enable_placement_id": 0, 00:35:40.651 "enable_zerocopy_send_server": true, 00:35:40.651 "enable_zerocopy_send_client": false, 00:35:40.651 "zerocopy_threshold": 0, 00:35:40.651 "tls_version": 0, 00:35:40.651 "enable_ktls": false 00:35:40.651 } 00:35:40.651 }, 00:35:40.651 { 00:35:40.651 "method": "sock_impl_set_options", 00:35:40.651 "params": { 00:35:40.651 "impl_name": "ssl", 00:35:40.651 "recv_buf_size": 4096, 00:35:40.651 "send_buf_size": 4096, 00:35:40.651 "enable_recv_pipe": true, 00:35:40.651 "enable_quickack": false, 00:35:40.651 "enable_placement_id": 0, 00:35:40.651 "enable_zerocopy_send_server": true, 00:35:40.651 "enable_zerocopy_send_client": false, 00:35:40.651 "zerocopy_threshold": 0, 00:35:40.651 "tls_version": 0, 00:35:40.651 "enable_ktls": false 00:35:40.651 } 
00:35:40.651 } 00:35:40.651 ] 00:35:40.651 }, 00:35:40.651 { 00:35:40.651 "subsystem": "vmd", 00:35:40.651 "config": [] 00:35:40.651 }, 00:35:40.651 { 00:35:40.651 "subsystem": "accel", 00:35:40.651 "config": [ 00:35:40.651 { 00:35:40.651 "method": "accel_set_options", 00:35:40.651 "params": { 00:35:40.651 "small_cache_size": 128, 00:35:40.651 "large_cache_size": 16, 00:35:40.651 "task_count": 2048, 00:35:40.651 "sequence_count": 2048, 00:35:40.651 "buf_count": 2048 00:35:40.651 } 00:35:40.651 } 00:35:40.651 ] 00:35:40.651 }, 00:35:40.651 { 00:35:40.651 "subsystem": "bdev", 00:35:40.651 "config": [ 00:35:40.651 { 00:35:40.651 "method": "bdev_set_options", 00:35:40.651 "params": { 00:35:40.651 "bdev_io_pool_size": 65535, 00:35:40.651 "bdev_io_cache_size": 256, 00:35:40.651 "bdev_auto_examine": true, 00:35:40.651 "iobuf_small_cache_size": 128, 00:35:40.651 "iobuf_large_cache_size": 16 00:35:40.651 } 00:35:40.651 }, 00:35:40.651 { 00:35:40.651 "method": "bdev_raid_set_options", 00:35:40.651 "params": { 00:35:40.651 "process_window_size_kb": 1024 00:35:40.651 } 00:35:40.651 }, 00:35:40.651 { 00:35:40.651 "method": "bdev_iscsi_set_options", 00:35:40.651 "params": { 00:35:40.651 "timeout_sec": 30 00:35:40.651 } 00:35:40.651 }, 00:35:40.651 { 00:35:40.651 "method": "bdev_nvme_set_options", 00:35:40.651 "params": { 00:35:40.651 "action_on_timeout": "none", 00:35:40.651 "timeout_us": 0, 00:35:40.651 "timeout_admin_us": 0, 00:35:40.651 "keep_alive_timeout_ms": 10000, 00:35:40.651 "arbitration_burst": 0, 00:35:40.651 "low_priority_weight": 0, 00:35:40.651 "medium_priority_weight": 0, 00:35:40.651 "high_priority_weight": 0, 00:35:40.651 "nvme_adminq_poll_period_us": 10000, 00:35:40.651 "nvme_ioq_poll_period_us": 0, 00:35:40.651 "io_queue_requests": 512, 00:35:40.651 "delay_cmd_submit": true, 00:35:40.651 "transport_retry_count": 4, 00:35:40.651 "bdev_retry_count": 3, 00:35:40.651 "transport_ack_timeout": 0, 00:35:40.651 "ctrlr_loss_timeout_sec": 0, 00:35:40.651 "reconnect_delay_sec": 0, 00:35:40.651 "fast_io_fail_timeout_sec": 0, 00:35:40.651 "disable_auto_failback": false, 00:35:40.651 "generate_uuids": false, 00:35:40.651 "transport_tos": 0, 00:35:40.651 "nvme_error_stat": false, 00:35:40.651 "rdma_srq_size": 0, 00:35:40.651 "io_path_stat": false, 00:35:40.651 "allow_accel_sequence": false, 00:35:40.651 "rdma_max_cq_size": 0, 00:35:40.651 "rdma_cm_event_timeout_ms": 0, 00:35:40.651 "dhchap_digests": [ 00:35:40.651 "sha256", 00:35:40.651 "sha384", 00:35:40.651 "sha512" 00:35:40.651 ], 00:35:40.651 "dhchap_dhgroups": [ 00:35:40.651 "null", 00:35:40.651 "ffdhe2048", 00:35:40.651 "ffdhe3072", 00:35:40.651 "ffdhe4096", 00:35:40.651 "ffdhe6144", 00:35:40.651 "ffdhe8192" 00:35:40.651 ] 00:35:40.651 } 00:35:40.651 }, 00:35:40.651 { 00:35:40.651 "method": "bdev_nvme_attach_controller", 00:35:40.651 "params": { 00:35:40.651 "name": "nvme0", 00:35:40.651 "trtype": "TCP", 00:35:40.651 "adrfam": "IPv4", 00:35:40.651 "traddr": "127.0.0.1", 00:35:40.651 "trsvcid": "4420", 00:35:40.651 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:40.651 "prchk_reftag": false, 00:35:40.652 "prchk_guard": false, 00:35:40.652 "ctrlr_loss_timeout_sec": 0, 00:35:40.652 "reconnect_delay_sec": 0, 00:35:40.652 "fast_io_fail_timeout_sec": 0, 00:35:40.652 "psk": "key0", 00:35:40.652 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:40.652 "hdgst": false, 00:35:40.652 "ddgst": false 00:35:40.652 } 00:35:40.652 }, 00:35:40.652 { 00:35:40.652 "method": "bdev_nvme_set_hotplug", 00:35:40.652 "params": { 00:35:40.652 "period_us": 100000, 00:35:40.652 
"enable": false 00:35:40.652 } 00:35:40.652 }, 00:35:40.652 { 00:35:40.652 "method": "bdev_wait_for_examine" 00:35:40.652 } 00:35:40.652 ] 00:35:40.652 }, 00:35:40.652 { 00:35:40.652 "subsystem": "nbd", 00:35:40.652 "config": [] 00:35:40.652 } 00:35:40.652 ] 00:35:40.652 }' 00:35:40.652 21:36:34 -- keyring/file.sh@114 -- # killprocess 1735962 00:35:40.652 21:36:34 -- common/autotest_common.sh@936 -- # '[' -z 1735962 ']' 00:35:40.652 21:36:34 -- common/autotest_common.sh@940 -- # kill -0 1735962 00:35:40.652 21:36:34 -- common/autotest_common.sh@941 -- # uname 00:35:40.652 21:36:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:35:40.652 21:36:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1735962 00:35:40.652 21:36:34 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:35:40.652 21:36:34 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:35:40.652 21:36:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1735962' 00:35:40.652 killing process with pid 1735962 00:35:40.652 21:36:34 -- common/autotest_common.sh@955 -- # kill 1735962 00:35:40.652 Received shutdown signal, test time was about 1.000000 seconds 00:35:40.652 00:35:40.652 Latency(us) 00:35:40.652 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:40.652 =================================================================================================================== 00:35:40.652 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:40.652 21:36:34 -- common/autotest_common.sh@960 -- # wait 1735962 00:35:40.911 21:36:35 -- keyring/file.sh@117 -- # bperfpid=1738577 00:35:40.911 21:36:35 -- keyring/file.sh@119 -- # waitforlisten 1738577 /var/tmp/bperf.sock 00:35:40.911 21:36:35 -- common/autotest_common.sh@817 -- # '[' -z 1738577 ']' 00:35:40.911 21:36:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:40.911 21:36:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:35:40.911 21:36:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:40.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:35:40.911 21:36:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:35:40.911 21:36:35 -- common/autotest_common.sh@10 -- # set +x 00:35:40.911 21:36:35 -- keyring/file.sh@115 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:35:40.911 21:36:35 -- keyring/file.sh@115 -- # echo '{ 00:35:40.911 "subsystems": [ 00:35:40.911 { 00:35:40.911 "subsystem": "keyring", 00:35:40.911 "config": [ 00:35:40.911 { 00:35:40.911 "method": "keyring_file_add_key", 00:35:40.911 "params": { 00:35:40.911 "name": "key0", 00:35:40.911 "path": "/tmp/tmp.lPx7kGrxrI" 00:35:40.911 } 00:35:40.911 }, 00:35:40.911 { 00:35:40.911 "method": "keyring_file_add_key", 00:35:40.911 "params": { 00:35:40.911 "name": "key1", 00:35:40.911 "path": "/tmp/tmp.vVKsuNleJ9" 00:35:40.911 } 00:35:40.911 } 00:35:40.911 ] 00:35:40.911 }, 00:35:40.911 { 00:35:40.911 "subsystem": "iobuf", 00:35:40.911 "config": [ 00:35:40.911 { 00:35:40.911 "method": "iobuf_set_options", 00:35:40.911 "params": { 00:35:40.911 "small_pool_count": 8192, 00:35:40.911 "large_pool_count": 1024, 00:35:40.911 "small_bufsize": 8192, 00:35:40.911 "large_bufsize": 135168 00:35:40.911 } 00:35:40.911 } 00:35:40.911 ] 00:35:40.911 }, 00:35:40.911 { 00:35:40.911 "subsystem": "sock", 00:35:40.911 "config": [ 00:35:40.911 { 00:35:40.911 "method": "sock_impl_set_options", 00:35:40.911 "params": { 00:35:40.911 "impl_name": "posix", 00:35:40.911 "recv_buf_size": 2097152, 00:35:40.911 "send_buf_size": 2097152, 00:35:40.911 "enable_recv_pipe": true, 00:35:40.911 "enable_quickack": false, 00:35:40.911 "enable_placement_id": 0, 00:35:40.911 "enable_zerocopy_send_server": true, 00:35:40.911 "enable_zerocopy_send_client": false, 00:35:40.911 "zerocopy_threshold": 0, 00:35:40.911 "tls_version": 0, 00:35:40.911 "enable_ktls": false 00:35:40.911 } 00:35:40.911 }, 00:35:40.911 { 00:35:40.911 "method": "sock_impl_set_options", 00:35:40.911 "params": { 00:35:40.911 "impl_name": "ssl", 00:35:40.911 "recv_buf_size": 4096, 00:35:40.911 "send_buf_size": 4096, 00:35:40.911 "enable_recv_pipe": true, 00:35:40.911 "enable_quickack": false, 00:35:40.911 "enable_placement_id": 0, 00:35:40.911 "enable_zerocopy_send_server": true, 00:35:40.911 "enable_zerocopy_send_client": false, 00:35:40.911 "zerocopy_threshold": 0, 00:35:40.911 "tls_version": 0, 00:35:40.911 "enable_ktls": false 00:35:40.911 } 00:35:40.911 } 00:35:40.911 ] 00:35:40.911 }, 00:35:40.911 { 00:35:40.911 "subsystem": "vmd", 00:35:40.911 "config": [] 00:35:40.911 }, 00:35:40.911 { 00:35:40.911 "subsystem": "accel", 00:35:40.911 "config": [ 00:35:40.911 { 00:35:40.911 "method": "accel_set_options", 00:35:40.911 "params": { 00:35:40.911 "small_cache_size": 128, 00:35:40.911 "large_cache_size": 16, 00:35:40.911 "task_count": 2048, 00:35:40.911 "sequence_count": 2048, 00:35:40.911 "buf_count": 2048 00:35:40.911 } 00:35:40.911 } 00:35:40.911 ] 00:35:40.911 }, 00:35:40.911 { 00:35:40.911 "subsystem": "bdev", 00:35:40.911 "config": [ 00:35:40.911 { 00:35:40.911 "method": "bdev_set_options", 00:35:40.911 "params": { 00:35:40.911 "bdev_io_pool_size": 65535, 00:35:40.911 "bdev_io_cache_size": 256, 00:35:40.911 "bdev_auto_examine": true, 00:35:40.911 "iobuf_small_cache_size": 128, 00:35:40.911 "iobuf_large_cache_size": 16 00:35:40.911 } 00:35:40.911 }, 00:35:40.911 { 00:35:40.911 "method": "bdev_raid_set_options", 00:35:40.911 "params": { 00:35:40.911 "process_window_size_kb": 1024 00:35:40.911 } 00:35:40.911 }, 00:35:40.911 { 
00:35:40.911 "method": "bdev_iscsi_set_options", 00:35:40.911 "params": { 00:35:40.911 "timeout_sec": 30 00:35:40.911 } 00:35:40.911 }, 00:35:40.911 { 00:35:40.911 "method": "bdev_nvme_set_options", 00:35:40.911 "params": { 00:35:40.911 "action_on_timeout": "none", 00:35:40.911 "timeout_us": 0, 00:35:40.911 "timeout_admin_us": 0, 00:35:40.911 "keep_alive_timeout_ms": 10000, 00:35:40.911 "arbitration_burst": 0, 00:35:40.911 "low_priority_weight": 0, 00:35:40.911 "medium_priority_weight": 0, 00:35:40.911 "high_priority_weight": 0, 00:35:40.911 "nvme_adminq_poll_period_us": 10000, 00:35:40.911 "nvme_ioq_poll_period_us": 0, 00:35:40.911 "io_queue_requests": 512, 00:35:40.911 "delay_cmd_submit": true, 00:35:40.911 "transport_retry_count": 4, 00:35:40.911 "bdev_retry_count": 3, 00:35:40.911 "transport_ack_timeout": 0, 00:35:40.911 "ctrlr_loss_timeout_sec": 0, 00:35:40.911 "reconnect_delay_sec": 0, 00:35:40.912 "fast_io_fail_timeout_sec": 0, 00:35:40.912 "disable_auto_failback": false, 00:35:40.912 "generate_uuids": false, 00:35:40.912 "transport_tos": 0, 00:35:40.912 "nvme_error_stat": false, 00:35:40.912 "rdma_srq_size": 0, 00:35:40.912 "io_path_stat": false, 00:35:40.912 "allow_accel_sequence": false, 00:35:40.912 "rdma_max_cq_size": 0, 00:35:40.912 "rdma_cm_event_timeout_ms": 0, 00:35:40.912 "dhchap_digests": [ 00:35:40.912 "sha256", 00:35:40.912 "sha384", 00:35:40.912 "sha512" 00:35:40.912 ], 00:35:40.912 "dhchap_dhgroups": [ 00:35:40.912 "null", 00:35:40.912 "ffdhe2048", 00:35:40.912 "ffdhe3072", 00:35:40.912 "ffdhe4096", 00:35:40.912 "ffdhe6144", 00:35:40.912 "ffdhe8192" 00:35:40.912 ] 00:35:40.912 } 00:35:40.912 }, 00:35:40.912 { 00:35:40.912 "method": "bdev_nvme_attach_controller", 00:35:40.912 "params": { 00:35:40.912 "name": "nvme0", 00:35:40.912 "trtype": "TCP", 00:35:40.912 "adrfam": "IPv4", 00:35:40.912 "traddr": "127.0.0.1", 00:35:40.912 "trsvcid": "4420", 00:35:40.912 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:40.912 "prchk_reftag": false, 00:35:40.912 "prchk_guard": false, 00:35:40.912 "ctrlr_loss_timeout_sec": 0, 00:35:40.912 "reconnect_delay_sec": 0, 00:35:40.912 "fast_io_fail_timeout_sec": 0, 00:35:40.912 "psk": "key0", 00:35:40.912 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:40.912 "hdgst": false, 00:35:40.912 "ddgst": false 00:35:40.912 } 00:35:40.912 }, 00:35:40.912 { 00:35:40.912 "method": "bdev_nvme_set_hotplug", 00:35:40.912 "params": { 00:35:40.912 "period_us": 100000, 00:35:40.912 "enable": false 00:35:40.912 } 00:35:40.912 }, 00:35:40.912 { 00:35:40.912 "method": "bdev_wait_for_examine" 00:35:40.912 } 00:35:40.912 ] 00:35:40.912 }, 00:35:40.912 { 00:35:40.912 "subsystem": "nbd", 00:35:40.912 "config": [] 00:35:40.912 } 00:35:40.912 ] 00:35:40.912 }' 00:35:41.170 [2024-04-23 21:36:35.205519] Starting SPDK v24.05-pre git sha1 3f2c8979187 / DPDK 23.11.0 initialization... 
00:35:41.170 [2024-04-23 21:36:35.205639] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1738577 ] 00:35:41.170 EAL: No free 2048 kB hugepages reported on node 1 00:35:41.170 [2024-04-23 21:36:35.314395] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:41.170 [2024-04-23 21:36:35.403654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:35:41.429 [2024-04-23 21:36:35.619640] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:41.688 21:36:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:35:41.688 21:36:35 -- common/autotest_common.sh@850 -- # return 0 00:35:41.688 21:36:35 -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:35:41.688 21:36:35 -- keyring/file.sh@120 -- # jq length 00:35:41.688 21:36:35 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:41.948 21:36:36 -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:35:41.948 21:36:36 -- keyring/file.sh@121 -- # get_refcnt key0 00:35:41.948 21:36:36 -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:41.948 21:36:36 -- keyring/common.sh@12 -- # get_key key0 00:35:41.948 21:36:36 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:41.948 21:36:36 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:41.948 21:36:36 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:41.948 21:36:36 -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:35:41.948 21:36:36 -- keyring/file.sh@122 -- # get_refcnt key1 00:35:41.948 21:36:36 -- keyring/common.sh@12 -- # get_key key1 00:35:41.948 21:36:36 -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:41.948 21:36:36 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:41.948 21:36:36 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:41.948 21:36:36 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:42.207 21:36:36 -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:35:42.208 21:36:36 -- keyring/file.sh@123 -- # jq -r '.[].name' 00:35:42.208 21:36:36 -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:35:42.208 21:36:36 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:35:42.208 21:36:36 -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:35:42.208 21:36:36 -- keyring/file.sh@1 -- # cleanup 00:35:42.208 21:36:36 -- keyring/file.sh@19 -- # rm -f /tmp/tmp.lPx7kGrxrI /tmp/tmp.vVKsuNleJ9 00:35:42.208 21:36:36 -- keyring/file.sh@20 -- # killprocess 1738577 00:35:42.208 21:36:36 -- common/autotest_common.sh@936 -- # '[' -z 1738577 ']' 00:35:42.208 21:36:36 -- common/autotest_common.sh@940 -- # kill -0 1738577 00:35:42.208 21:36:36 -- common/autotest_common.sh@941 -- # uname 00:35:42.208 21:36:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:35:42.208 21:36:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1738577 00:35:42.470 21:36:36 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:35:42.470 21:36:36 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:35:42.470 21:36:36 -- common/autotest_common.sh@954 -- # 
echo 'killing process with pid 1738577' 00:35:42.470 killing process with pid 1738577 00:35:42.470 21:36:36 -- common/autotest_common.sh@955 -- # kill 1738577 00:35:42.470 Received shutdown signal, test time was about 1.000000 seconds 00:35:42.470 00:35:42.470 Latency(us) 00:35:42.470 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:42.470 =================================================================================================================== 00:35:42.470 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:35:42.470 21:36:36 -- common/autotest_common.sh@960 -- # wait 1738577 00:35:42.731 21:36:36 -- keyring/file.sh@21 -- # killprocess 1735735 00:35:42.731 21:36:36 -- common/autotest_common.sh@936 -- # '[' -z 1735735 ']' 00:35:42.731 21:36:36 -- common/autotest_common.sh@940 -- # kill -0 1735735 00:35:42.731 21:36:36 -- common/autotest_common.sh@941 -- # uname 00:35:42.731 21:36:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:35:42.731 21:36:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1735735 00:35:42.731 21:36:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:35:42.731 21:36:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:35:42.731 21:36:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1735735' 00:35:42.731 killing process with pid 1735735 00:35:42.731 21:36:36 -- common/autotest_common.sh@955 -- # kill 1735735 00:35:42.731 [2024-04-23 21:36:36.913666] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:35:42.731 21:36:36 -- common/autotest_common.sh@960 -- # wait 1735735 00:35:43.667 00:35:43.667 real 0m11.332s 00:35:43.667 user 0m24.432s 00:35:43.667 sys 0m2.718s 00:35:43.667 21:36:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:35:43.667 21:36:37 -- common/autotest_common.sh@10 -- # set +x 00:35:43.667 ************************************ 00:35:43.667 END TEST keyring_file 00:35:43.667 ************************************ 00:35:43.667 21:36:37 -- spdk/autotest.sh@294 -- # [[ n == y ]] 00:35:43.667 21:36:37 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:35:43.667 21:36:37 -- spdk/autotest.sh@310 -- # '[' 0 -eq 1 ']' 00:35:43.667 21:36:37 -- spdk/autotest.sh@314 -- # '[' 0 -eq 1 ']' 00:35:43.667 21:36:37 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:35:43.667 21:36:37 -- spdk/autotest.sh@328 -- # '[' 0 -eq 1 ']' 00:35:43.668 21:36:37 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:35:43.668 21:36:37 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:35:43.668 21:36:37 -- spdk/autotest.sh@341 -- # '[' 0 -eq 1 ']' 00:35:43.668 21:36:37 -- spdk/autotest.sh@345 -- # '[' 0 -eq 1 ']' 00:35:43.668 21:36:37 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:35:43.668 21:36:37 -- spdk/autotest.sh@354 -- # '[' 0 -eq 1 ']' 00:35:43.668 21:36:37 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:35:43.668 21:36:37 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:35:43.668 21:36:37 -- spdk/autotest.sh@369 -- # [[ 0 -eq 1 ]] 00:35:43.668 21:36:37 -- spdk/autotest.sh@373 -- # [[ 0 -eq 1 ]] 00:35:43.668 21:36:37 -- spdk/autotest.sh@378 -- # trap - SIGINT SIGTERM EXIT 00:35:43.668 21:36:37 -- spdk/autotest.sh@380 -- # timing_enter post_cleanup 00:35:43.668 21:36:37 -- common/autotest_common.sh@710 -- # xtrace_disable 00:35:43.668 21:36:37 -- common/autotest_common.sh@10 -- # set +x 00:35:43.668 21:36:37 -- spdk/autotest.sh@381 -- # autotest_cleanup 00:35:43.668 21:36:37 -- 
common/autotest_common.sh@1378 -- # local autotest_es=0 00:35:43.668 21:36:37 -- common/autotest_common.sh@1379 -- # xtrace_disable 00:35:43.668 21:36:37 -- common/autotest_common.sh@10 -- # set +x 00:35:48.945 INFO: APP EXITING 00:35:48.945 INFO: killing all VMs 00:35:48.945 INFO: killing vhost app 00:35:48.945 INFO: EXIT DONE 00:35:51.483 0000:c9:00.0 (144d a80a): Already using the nvme driver 00:35:51.483 0000:74:02.0 (8086 0cfe): Already using the idxd driver 00:35:51.483 0000:f1:02.0 (8086 0cfe): Already using the idxd driver 00:35:51.483 0000:79:02.0 (8086 0cfe): Already using the idxd driver 00:35:51.483 0000:6f:01.0 (8086 0b25): Already using the idxd driver 00:35:51.483 0000:6f:02.0 (8086 0cfe): Already using the idxd driver 00:35:51.483 0000:f6:01.0 (8086 0b25): Already using the idxd driver 00:35:51.483 0000:f6:02.0 (8086 0cfe): Already using the idxd driver 00:35:51.483 0000:74:01.0 (8086 0b25): Already using the idxd driver 00:35:51.483 0000:6a:02.0 (8086 0cfe): Already using the idxd driver 00:35:51.483 0000:79:01.0 (8086 0b25): Already using the idxd driver 00:35:51.483 0000:ec:01.0 (8086 0b25): Already using the idxd driver 00:35:51.483 0000:6a:01.0 (8086 0b25): Already using the idxd driver 00:35:51.483 0000:ec:02.0 (8086 0cfe): Already using the idxd driver 00:35:51.483 0000:e7:01.0 (8086 0b25): Already using the idxd driver 00:35:51.483 0000:e7:02.0 (8086 0cfe): Already using the idxd driver 00:35:51.483 0000:f1:01.0 (8086 0b25): Already using the idxd driver 00:35:51.483 0000:03:00.0 (1344 51c3): Already using the nvme driver 00:35:54.024 Cleaning 00:35:54.024 Removing: /var/run/dpdk/spdk0/config 00:35:54.024 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:35:54.024 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:35:54.024 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:35:54.024 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:35:54.024 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:35:54.024 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:35:54.024 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:35:54.024 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:35:54.024 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:35:54.024 Removing: /var/run/dpdk/spdk0/hugepage_info 00:35:54.024 Removing: /var/run/dpdk/spdk1/config 00:35:54.024 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:35:54.024 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:35:54.024 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:35:54.024 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:35:54.024 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:35:54.024 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:35:54.024 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:35:54.024 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:35:54.024 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:35:54.025 Removing: /var/run/dpdk/spdk1/hugepage_info 00:35:54.025 Removing: /var/run/dpdk/spdk2/config 00:35:54.025 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:35:54.025 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:35:54.025 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:35:54.025 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:35:54.025 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:35:54.025 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:35:54.025 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:35:54.025 
Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:35:54.025 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:35:54.025 Removing: /var/run/dpdk/spdk2/hugepage_info 00:35:54.025 Removing: /var/run/dpdk/spdk3/config 00:35:54.025 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:35:54.025 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:35:54.025 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:35:54.025 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:35:54.025 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:35:54.025 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:35:54.025 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:35:54.025 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:35:54.025 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:35:54.025 Removing: /var/run/dpdk/spdk3/hugepage_info 00:35:54.286 Removing: /var/run/dpdk/spdk4/config 00:35:54.286 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:35:54.286 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:35:54.286 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:35:54.286 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:35:54.286 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:35:54.286 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:35:54.286 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:35:54.286 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:35:54.286 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:35:54.286 Removing: /var/run/dpdk/spdk4/hugepage_info 00:35:54.286 Removing: /dev/shm/nvmf_trace.0 00:35:54.286 Removing: /dev/shm/spdk_tgt_trace.pid1214870 00:35:54.286 Removing: /var/run/dpdk/spdk0 00:35:54.286 Removing: /var/run/dpdk/spdk1 00:35:54.286 Removing: /var/run/dpdk/spdk2 00:35:54.286 Removing: /var/run/dpdk/spdk3 00:35:54.286 Removing: /var/run/dpdk/spdk4 00:35:54.286 Removing: /var/run/dpdk/spdk_pid1212631 00:35:54.286 Removing: /var/run/dpdk/spdk_pid1214870 00:35:54.287 Removing: /var/run/dpdk/spdk_pid1215749 00:35:54.287 Removing: /var/run/dpdk/spdk_pid1216921 00:35:54.287 Removing: /var/run/dpdk/spdk_pid1217485 00:35:54.287 Removing: /var/run/dpdk/spdk_pid1218737 00:35:54.287 Removing: /var/run/dpdk/spdk_pid1218755 00:35:54.287 Removing: /var/run/dpdk/spdk_pid1219302 00:35:54.287 Removing: /var/run/dpdk/spdk_pid1220780 00:35:54.287 Removing: /var/run/dpdk/spdk_pid1221770 00:35:54.287 Removing: /var/run/dpdk/spdk_pid1222169 00:35:54.287 Removing: /var/run/dpdk/spdk_pid1222805 00:35:54.287 Removing: /var/run/dpdk/spdk_pid1223185 00:35:54.287 Removing: /var/run/dpdk/spdk_pid1223727 00:35:54.287 Removing: /var/run/dpdk/spdk_pid1224015 00:35:54.287 Removing: /var/run/dpdk/spdk_pid1224337 00:35:54.287 Removing: /var/run/dpdk/spdk_pid1224757 00:35:54.287 Removing: /var/run/dpdk/spdk_pid1225246 00:35:54.287 Removing: /var/run/dpdk/spdk_pid1228825 00:35:54.287 Removing: /var/run/dpdk/spdk_pid1229582 00:35:54.287 Removing: /var/run/dpdk/spdk_pid1230095 00:35:54.287 Removing: /var/run/dpdk/spdk_pid1230239 00:35:54.287 Removing: /var/run/dpdk/spdk_pid1231160 00:35:54.287 Removing: /var/run/dpdk/spdk_pid1231181 00:35:54.287 Removing: /var/run/dpdk/spdk_pid1232106 00:35:54.287 Removing: /var/run/dpdk/spdk_pid1232287 00:35:54.287 Removing: /var/run/dpdk/spdk_pid1232737 00:35:54.287 Removing: /var/run/dpdk/spdk_pid1232757 00:35:54.287 Removing: /var/run/dpdk/spdk_pid1233111 00:35:54.287 Removing: /var/run/dpdk/spdk_pid1233385 00:35:54.287 Removing: /var/run/dpdk/spdk_pid1234096 
00:35:54.287 Removing: /var/run/dpdk/spdk_pid1234416 00:35:54.287 Removing: /var/run/dpdk/spdk_pid1234792 00:35:54.287 Removing: /var/run/dpdk/spdk_pid1237094 00:35:54.287 Removing: /var/run/dpdk/spdk_pid1238721 00:35:54.287 Removing: /var/run/dpdk/spdk_pid1240581 00:35:54.287 Removing: /var/run/dpdk/spdk_pid1242647 00:35:54.287 Removing: /var/run/dpdk/spdk_pid1244465 00:35:54.287 Removing: /var/run/dpdk/spdk_pid1246529 00:35:54.287 Removing: /var/run/dpdk/spdk_pid1248411 00:35:54.287 Removing: /var/run/dpdk/spdk_pid1250422 00:35:54.287 Removing: /var/run/dpdk/spdk_pid1252324 00:35:54.287 Removing: /var/run/dpdk/spdk_pid1254298 00:35:54.287 Removing: /var/run/dpdk/spdk_pid1256259 00:35:54.287 Removing: /var/run/dpdk/spdk_pid1258179 00:35:54.287 Removing: /var/run/dpdk/spdk_pid1260231 00:35:54.287 Removing: /var/run/dpdk/spdk_pid1262070 00:35:54.287 Removing: /var/run/dpdk/spdk_pid1264156 00:35:54.287 Removing: /var/run/dpdk/spdk_pid1266523 00:35:54.287 Removing: /var/run/dpdk/spdk_pid1268475 00:35:54.287 Removing: /var/run/dpdk/spdk_pid1270402 00:35:54.287 Removing: /var/run/dpdk/spdk_pid1272471 00:35:54.287 Removing: /var/run/dpdk/spdk_pid1274286 00:35:54.287 Removing: /var/run/dpdk/spdk_pid1276241 00:35:54.287 Removing: /var/run/dpdk/spdk_pid1278210 00:35:54.287 Removing: /var/run/dpdk/spdk_pid1280225 00:35:54.287 Removing: /var/run/dpdk/spdk_pid1282140 00:35:54.287 Removing: /var/run/dpdk/spdk_pid1284516 00:35:54.287 Removing: /var/run/dpdk/spdk_pid1288962 00:35:54.575 Removing: /var/run/dpdk/spdk_pid1383654 00:35:54.575 Removing: /var/run/dpdk/spdk_pid1388464 00:35:54.575 Removing: /var/run/dpdk/spdk_pid1398526 00:35:54.575 Removing: /var/run/dpdk/spdk_pid1405231 00:35:54.575 Removing: /var/run/dpdk/spdk_pid1410017 00:35:54.575 Removing: /var/run/dpdk/spdk_pid1410614 00:35:54.575 Removing: /var/run/dpdk/spdk_pid1421852 00:35:54.575 Removing: /var/run/dpdk/spdk_pid1422240 00:35:54.575 Removing: /var/run/dpdk/spdk_pid1427262 00:35:54.575 Removing: /var/run/dpdk/spdk_pid1433856 00:35:54.575 Removing: /var/run/dpdk/spdk_pid1436750 00:35:54.575 Removing: /var/run/dpdk/spdk_pid1448632 00:35:54.575 Removing: /var/run/dpdk/spdk_pid1459034 00:35:54.575 Removing: /var/run/dpdk/spdk_pid1461667 00:35:54.575 Removing: /var/run/dpdk/spdk_pid1462802 00:35:54.575 Removing: /var/run/dpdk/spdk_pid1482440 00:35:54.575 Removing: /var/run/dpdk/spdk_pid1486946 00:35:54.575 Removing: /var/run/dpdk/spdk_pid1492037 00:35:54.575 Removing: /var/run/dpdk/spdk_pid1493841 00:35:54.575 Removing: /var/run/dpdk/spdk_pid1496158 00:35:54.575 Removing: /var/run/dpdk/spdk_pid1496361 00:35:54.575 Removing: /var/run/dpdk/spdk_pid1496543 00:35:54.575 Removing: /var/run/dpdk/spdk_pid1496822 00:35:54.575 Removing: /var/run/dpdk/spdk_pid1497750 00:35:54.575 Removing: /var/run/dpdk/spdk_pid1499823 00:35:54.575 Removing: /var/run/dpdk/spdk_pid1501076 00:35:54.575 Removing: /var/run/dpdk/spdk_pid1501706 00:35:54.575 Removing: /var/run/dpdk/spdk_pid1504384 00:35:54.575 Removing: /var/run/dpdk/spdk_pid1505045 00:35:54.575 Removing: /var/run/dpdk/spdk_pid1505957 00:35:54.575 Removing: /var/run/dpdk/spdk_pid1510827 00:35:54.575 Removing: /var/run/dpdk/spdk_pid1517879 00:35:54.575 Removing: /var/run/dpdk/spdk_pid1523553 00:35:54.575 Removing: /var/run/dpdk/spdk_pid1562273 00:35:54.575 Removing: /var/run/dpdk/spdk_pid1567085 00:35:54.575 Removing: /var/run/dpdk/spdk_pid1575824 00:35:54.575 Removing: /var/run/dpdk/spdk_pid1575831 00:35:54.575 Removing: /var/run/dpdk/spdk_pid1580766 00:35:54.575 Removing: /var/run/dpdk/spdk_pid1580954 
00:35:54.575 Removing: /var/run/dpdk/spdk_pid1581250 00:35:54.575 Removing: /var/run/dpdk/spdk_pid1581832 00:35:54.575 Removing: /var/run/dpdk/spdk_pid1581848 00:35:54.575 Removing: /var/run/dpdk/spdk_pid1582945 00:35:54.575 Removing: /var/run/dpdk/spdk_pid1584817 00:35:54.575 Removing: /var/run/dpdk/spdk_pid1586861 00:35:54.575 Removing: /var/run/dpdk/spdk_pid1588703 00:35:54.575 Removing: /var/run/dpdk/spdk_pid1590684 00:35:54.575 Removing: /var/run/dpdk/spdk_pid1592741 00:35:54.575 Removing: /var/run/dpdk/spdk_pid1599299 00:35:54.575 Removing: /var/run/dpdk/spdk_pid1600026 00:35:54.575 Removing: /var/run/dpdk/spdk_pid1601070 00:35:54.575 Removing: /var/run/dpdk/spdk_pid1601817 00:35:54.575 Removing: /var/run/dpdk/spdk_pid1608093 00:35:54.575 Removing: /var/run/dpdk/spdk_pid1611193 00:35:54.575 Removing: /var/run/dpdk/spdk_pid1617698 00:35:54.575 Removing: /var/run/dpdk/spdk_pid1623550 00:35:54.575 Removing: /var/run/dpdk/spdk_pid1631781 00:35:54.575 Removing: /var/run/dpdk/spdk_pid1631787 00:35:54.575 Removing: /var/run/dpdk/spdk_pid1651959 00:35:54.575 Removing: /var/run/dpdk/spdk_pid1653873 00:35:54.575 Removing: /var/run/dpdk/spdk_pid1656228 00:35:54.576 Removing: /var/run/dpdk/spdk_pid1658237 00:35:54.576 Removing: /var/run/dpdk/spdk_pid1663319 00:35:54.576 Removing: /var/run/dpdk/spdk_pid1664481 00:35:54.576 Removing: /var/run/dpdk/spdk_pid1665433 00:35:54.576 Removing: /var/run/dpdk/spdk_pid1666793 00:35:54.576 Removing: /var/run/dpdk/spdk_pid1668507 00:35:54.576 Removing: /var/run/dpdk/spdk_pid1670001 00:35:54.576 Removing: /var/run/dpdk/spdk_pid1671157 00:35:54.576 Removing: /var/run/dpdk/spdk_pid1672522 00:35:54.576 Removing: /var/run/dpdk/spdk_pid1674980 00:35:54.576 Removing: /var/run/dpdk/spdk_pid1685318 00:35:54.576 Removing: /var/run/dpdk/spdk_pid1685347 00:35:54.576 Removing: /var/run/dpdk/spdk_pid1691558 00:35:54.576 Removing: /var/run/dpdk/spdk_pid1694502 00:35:54.576 Removing: /var/run/dpdk/spdk_pid1697265 00:35:54.576 Removing: /var/run/dpdk/spdk_pid1698962 00:35:54.576 Removing: /var/run/dpdk/spdk_pid1702321 00:35:54.881 Removing: /var/run/dpdk/spdk_pid1704642 00:35:54.881 Removing: /var/run/dpdk/spdk_pid1718798 00:35:54.881 Removing: /var/run/dpdk/spdk_pid1720045 00:35:54.881 Removing: /var/run/dpdk/spdk_pid1721551 00:35:54.881 Removing: /var/run/dpdk/spdk_pid1726760 00:35:54.881 Removing: /var/run/dpdk/spdk_pid1727941 00:35:54.881 Removing: /var/run/dpdk/spdk_pid1729157 00:35:54.881 Removing: /var/run/dpdk/spdk_pid1735735 00:35:54.881 Removing: /var/run/dpdk/spdk_pid1735962 00:35:54.881 Removing: /var/run/dpdk/spdk_pid1738577 00:35:54.881 Clean 00:35:54.881 21:36:48 -- common/autotest_common.sh@1437 -- # return 0 00:35:54.881 21:36:48 -- spdk/autotest.sh@382 -- # timing_exit post_cleanup 00:35:54.881 21:36:48 -- common/autotest_common.sh@716 -- # xtrace_disable 00:35:54.881 21:36:48 -- common/autotest_common.sh@10 -- # set +x 00:35:54.881 21:36:49 -- spdk/autotest.sh@384 -- # timing_exit autotest 00:35:54.881 21:36:49 -- common/autotest_common.sh@716 -- # xtrace_disable 00:35:54.881 21:36:49 -- common/autotest_common.sh@10 -- # set +x 00:35:54.881 21:36:49 -- spdk/autotest.sh@385 -- # chmod a+r /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/timing.txt 00:35:54.881 21:36:49 -- spdk/autotest.sh@387 -- # [[ -f /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/udev.log ]] 00:35:54.881 21:36:49 -- spdk/autotest.sh@387 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/udev.log 00:35:54.881 21:36:49 -- spdk/autotest.sh@389 -- # hash lcov 
00:35:54.881 21:36:49 -- spdk/autotest.sh@389 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:35:54.881 21:36:49 -- spdk/autotest.sh@391 -- # hostname 00:35:54.881 21:36:49 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/dsa-phy-autotest/spdk -t spdk-fcp-03 -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_test.info 00:35:55.142 geninfo: WARNING: invalid characters removed from testname! 00:36:17.107 21:37:09 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info 00:36:17.679 21:37:11 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info 00:36:19.067 21:37:13 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info 00:36:20.453 21:37:14 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info 00:36:21.840 21:37:15 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info 00:36:22.786 21:37:16 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info 00:36:24.176 21:37:18 -- spdk/autotest.sh@398 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:36:24.176 21:37:18 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:36:24.176 21:37:18 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:36:24.176 21:37:18 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:24.176 21:37:18 -- scripts/common.sh@517 -- $ 
source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:24.176 21:37:18 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:24.176 21:37:18 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:24.176 21:37:18 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:24.176 21:37:18 -- paths/export.sh@5 -- $ export PATH 00:36:24.176 21:37:18 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:24.176 21:37:18 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/dsa-phy-autotest/spdk/../output 00:36:24.176 21:37:18 -- common/autobuild_common.sh@435 -- $ date +%s 00:36:24.176 21:37:18 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713901038.XXXXXX 00:36:24.176 21:37:18 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713901038.KnfAB1 00:36:24.176 21:37:18 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:36:24.176 21:37:18 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:36:24.176 21:37:18 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/' 00:36:24.176 21:37:18 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/dsa-phy-autotest/spdk/xnvme --exclude /tmp' 00:36:24.176 21:37:18 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/dsa-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:36:24.176 21:37:18 -- common/autobuild_common.sh@451 -- $ get_config_params 00:36:24.176 21:37:18 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:36:24.176 21:37:18 -- common/autotest_common.sh@10 -- $ set +x 00:36:24.176 21:37:18 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk' 00:36:24.176 21:37:18 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:36:24.176 21:37:18 -- pm/common@17 -- $ local 
monitor 00:36:24.176 21:37:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:24.176 21:37:18 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=1751937 00:36:24.176 21:37:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:24.176 21:37:18 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=1751939 00:36:24.176 21:37:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:24.176 21:37:18 -- pm/common@21 -- $ date +%s 00:36:24.176 21:37:18 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=1751940 00:36:24.176 21:37:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:24.176 21:37:18 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=1751942 00:36:24.176 21:37:18 -- pm/common@26 -- $ sleep 1 00:36:24.176 21:37:18 -- pm/common@21 -- $ date +%s 00:36:24.176 21:37:18 -- pm/common@21 -- $ date +%s 00:36:24.176 21:37:18 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713901038 00:36:24.176 21:37:18 -- pm/common@21 -- $ date +%s 00:36:24.176 21:37:18 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713901038 00:36:24.176 21:37:18 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713901038 00:36:24.176 21:37:18 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713901038 00:36:24.176 Redirecting to /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1713901038_collect-cpu-load.pm.log 00:36:24.176 Redirecting to /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1713901038_collect-bmc-pm.bmc.pm.log 00:36:24.176 Redirecting to /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1713901038_collect-vmstat.pm.log 00:36:24.176 Redirecting to /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1713901038_collect-cpu-temp.pm.log 00:36:25.122 21:37:19 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:36:25.122 21:37:19 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j128 00:36:25.122 21:37:19 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/dsa-phy-autotest/spdk 00:36:25.122 21:37:19 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:36:25.122 21:37:19 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:36:25.122 21:37:19 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:36:25.122 21:37:19 -- spdk/autopackage.sh@19 -- $ timing_finish 00:36:25.122 21:37:19 -- common/autotest_common.sh@722 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:36:25.122 21:37:19 -- common/autotest_common.sh@723 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:36:25.122 21:37:19 -- common/autotest_common.sh@725 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/timing.txt 00:36:25.122 21:37:19 -- spdk/autopackage.sh@20 -- $ exit 0 00:36:25.122 21:37:19 -- spdk/autopackage.sh@1 -- $ 
stop_monitor_resources 00:36:25.122 21:37:19 -- pm/common@30 -- $ signal_monitor_resources TERM 00:36:25.122 21:37:19 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:36:25.122 21:37:19 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:25.122 21:37:19 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:36:25.122 21:37:19 -- pm/common@45 -- $ pid=1751950 00:36:25.122 21:37:19 -- pm/common@52 -- $ sudo kill -TERM 1751950 00:36:25.383 21:37:19 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:25.383 21:37:19 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:36:25.383 21:37:19 -- pm/common@45 -- $ pid=1751959 00:36:25.383 21:37:19 -- pm/common@52 -- $ sudo kill -TERM 1751959 00:36:25.383 21:37:19 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:25.383 21:37:19 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:36:25.383 21:37:19 -- pm/common@45 -- $ pid=1751951 00:36:25.383 21:37:19 -- pm/common@52 -- $ sudo kill -TERM 1751951 00:36:25.383 21:37:19 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:25.383 21:37:19 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:36:25.383 21:37:19 -- pm/common@45 -- $ pid=1751954 00:36:25.383 21:37:19 -- pm/common@52 -- $ sudo kill -TERM 1751954 00:36:25.383 + [[ -n 1101605 ]] 00:36:25.383 + sudo kill 1101605 00:36:25.394 [Pipeline] } 00:36:25.412 [Pipeline] // stage 00:36:25.418 [Pipeline] } 00:36:25.435 [Pipeline] // timeout 00:36:25.441 [Pipeline] } 00:36:25.458 [Pipeline] // catchError 00:36:25.464 [Pipeline] } 00:36:25.482 [Pipeline] // wrap 00:36:25.488 [Pipeline] } 00:36:25.504 [Pipeline] // catchError 00:36:25.513 [Pipeline] stage 00:36:25.515 [Pipeline] { (Epilogue) 00:36:25.531 [Pipeline] catchError 00:36:25.532 [Pipeline] { 00:36:25.547 [Pipeline] echo 00:36:25.548 Cleanup processes 00:36:25.554 [Pipeline] sh 00:36:25.843 + sudo pgrep -af /var/jenkins/workspace/dsa-phy-autotest/spdk 00:36:25.843 1752483 sudo pgrep -af /var/jenkins/workspace/dsa-phy-autotest/spdk 00:36:25.857 [Pipeline] sh 00:36:26.143 ++ sudo pgrep -af /var/jenkins/workspace/dsa-phy-autotest/spdk 00:36:26.143 ++ grep -v 'sudo pgrep' 00:36:26.143 ++ awk '{print $1}' 00:36:26.143 + sudo kill -9 00:36:26.143 + true 00:36:26.156 [Pipeline] sh 00:36:26.443 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:36:36.430 [Pipeline] sh 00:36:36.716 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:36:36.716 Artifacts sizes are good 00:36:36.731 [Pipeline] archiveArtifacts 00:36:36.739 Archiving artifacts 00:36:36.987 [Pipeline] sh 00:36:37.341 + sudo chown -R sys_sgci /var/jenkins/workspace/dsa-phy-autotest 00:36:37.356 [Pipeline] cleanWs 00:36:37.367 [WS-CLEANUP] Deleting project workspace... 00:36:37.367 [WS-CLEANUP] Deferred wipeout is used... 00:36:37.375 [WS-CLEANUP] done 00:36:37.377 [Pipeline] } 00:36:37.397 [Pipeline] // catchError 00:36:37.411 [Pipeline] sh 00:36:37.698 + logger -p user.info -t JENKINS-CI 00:36:37.707 [Pipeline] } 00:36:37.724 [Pipeline] // stage 00:36:37.730 [Pipeline] } 00:36:37.746 [Pipeline] // node 00:36:37.752 [Pipeline] End of Pipeline 00:36:37.795 Finished: SUCCESS
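The coverage post-processing that autotest.sh@391-397 ran earlier, before packaging, reduces to three steps: capture the counters from the test run, merge them with the pre-test baseline, then strip third-party and helper sources. A condensed sketch, assuming $SPDK points at the workspace checkout and with the --rc option list abbreviated from the full set the log passes:

    out="$SPDK/../output"
    rc="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"
    # Capture counters produced by the test run into cov_test.info.
    lcov $rc --no-external -q -c -d "$SPDK" -t "$(hostname)" -o "$out/cov_test.info"
    # Merge the pre-test baseline with the test capture.
    lcov $rc --no-external -q \
        -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
    # Drop third-party and helper sources, mirroring the -r filters in the log.
    for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov $rc --no-external -q -r "$out/cov_total.info" "$pat" -o "$out/cov_total.info"
    done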